Nov 29 00:35:50 np0005539508 kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 29 00:35:50 np0005539508 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 29 00:35:50 np0005539508 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 00:35:50 np0005539508 kernel: BIOS-provided physical RAM map:
Nov 29 00:35:50 np0005539508 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 29 00:35:50 np0005539508 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 29 00:35:50 np0005539508 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 29 00:35:50 np0005539508 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 29 00:35:50 np0005539508 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 29 00:35:50 np0005539508 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 29 00:35:50 np0005539508 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 29 00:35:50 np0005539508 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 29 00:35:50 np0005539508 kernel: NX (Execute Disable) protection: active
Nov 29 00:35:50 np0005539508 kernel: APIC: Static calls initialized
Nov 29 00:35:50 np0005539508 kernel: SMBIOS 2.8 present.
Nov 29 00:35:50 np0005539508 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 29 00:35:50 np0005539508 kernel: Hypervisor detected: KVM
Nov 29 00:35:50 np0005539508 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 29 00:35:50 np0005539508 kernel: kvm-clock: using sched offset of 3254755171 cycles
Nov 29 00:35:50 np0005539508 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 29 00:35:50 np0005539508 kernel: tsc: Detected 2800.000 MHz processor
Nov 29 00:35:50 np0005539508 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 29 00:35:50 np0005539508 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 29 00:35:50 np0005539508 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 29 00:35:50 np0005539508 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 29 00:35:50 np0005539508 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 29 00:35:50 np0005539508 kernel: Using GB pages for direct mapping
Nov 29 00:35:50 np0005539508 kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 29 00:35:50 np0005539508 kernel: ACPI: Early table checksum verification disabled
Nov 29 00:35:50 np0005539508 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 29 00:35:50 np0005539508 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 00:35:50 np0005539508 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 00:35:50 np0005539508 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 00:35:50 np0005539508 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 29 00:35:50 np0005539508 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 00:35:50 np0005539508 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 00:35:50 np0005539508 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 29 00:35:50 np0005539508 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 29 00:35:50 np0005539508 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 29 00:35:50 np0005539508 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 29 00:35:50 np0005539508 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 29 00:35:50 np0005539508 kernel: No NUMA configuration found
Nov 29 00:35:50 np0005539508 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 29 00:35:50 np0005539508 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 29 00:35:50 np0005539508 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 29 00:35:50 np0005539508 kernel: Zone ranges:
Nov 29 00:35:50 np0005539508 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 29 00:35:50 np0005539508 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 29 00:35:50 np0005539508 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 29 00:35:50 np0005539508 kernel:  Device   empty
Nov 29 00:35:50 np0005539508 kernel: Movable zone start for each node
Nov 29 00:35:50 np0005539508 kernel: Early memory node ranges
Nov 29 00:35:50 np0005539508 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 29 00:35:50 np0005539508 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 29 00:35:50 np0005539508 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 29 00:35:50 np0005539508 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 29 00:35:50 np0005539508 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 29 00:35:50 np0005539508 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 29 00:35:50 np0005539508 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 29 00:35:50 np0005539508 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 29 00:35:50 np0005539508 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 29 00:35:50 np0005539508 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 29 00:35:50 np0005539508 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 29 00:35:50 np0005539508 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 29 00:35:50 np0005539508 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 29 00:35:50 np0005539508 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 29 00:35:50 np0005539508 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 29 00:35:50 np0005539508 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 29 00:35:50 np0005539508 kernel: TSC deadline timer available
Nov 29 00:35:50 np0005539508 kernel: CPU topo: Max. logical packages:   8
Nov 29 00:35:50 np0005539508 kernel: CPU topo: Max. logical dies:       8
Nov 29 00:35:50 np0005539508 kernel: CPU topo: Max. dies per package:   1
Nov 29 00:35:50 np0005539508 kernel: CPU topo: Max. threads per core:   1
Nov 29 00:35:50 np0005539508 kernel: CPU topo: Num. cores per package:     1
Nov 29 00:35:50 np0005539508 kernel: CPU topo: Num. threads per package:   1
Nov 29 00:35:50 np0005539508 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 29 00:35:50 np0005539508 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 29 00:35:50 np0005539508 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 29 00:35:50 np0005539508 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 29 00:35:50 np0005539508 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 29 00:35:50 np0005539508 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 29 00:35:50 np0005539508 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 29 00:35:50 np0005539508 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 29 00:35:50 np0005539508 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 29 00:35:50 np0005539508 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 29 00:35:50 np0005539508 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 29 00:35:50 np0005539508 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 29 00:35:50 np0005539508 kernel: Booting paravirtualized kernel on KVM
Nov 29 00:35:50 np0005539508 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 29 00:35:50 np0005539508 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 29 00:35:50 np0005539508 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 29 00:35:50 np0005539508 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 29 00:35:50 np0005539508 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 00:35:50 np0005539508 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 29 00:35:50 np0005539508 kernel: random: crng init done
Nov 29 00:35:50 np0005539508 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 29 00:35:50 np0005539508 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 29 00:35:50 np0005539508 kernel: Fallback order for Node 0: 0 
Nov 29 00:35:50 np0005539508 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 29 00:35:50 np0005539508 kernel: Policy zone: Normal
Nov 29 00:35:50 np0005539508 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 29 00:35:50 np0005539508 kernel: software IO TLB: area num 8.
Nov 29 00:35:50 np0005539508 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 29 00:35:50 np0005539508 kernel: ftrace: allocating 49313 entries in 193 pages
Nov 29 00:35:50 np0005539508 kernel: ftrace: allocated 193 pages with 3 groups
Nov 29 00:35:50 np0005539508 kernel: Dynamic Preempt: voluntary
Nov 29 00:35:50 np0005539508 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 29 00:35:50 np0005539508 kernel: rcu: 	RCU event tracing is enabled.
Nov 29 00:35:50 np0005539508 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 29 00:35:50 np0005539508 kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 29 00:35:50 np0005539508 kernel: 	Rude variant of Tasks RCU enabled.
Nov 29 00:35:50 np0005539508 kernel: 	Tracing variant of Tasks RCU enabled.
Nov 29 00:35:50 np0005539508 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 29 00:35:50 np0005539508 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 29 00:35:50 np0005539508 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 00:35:50 np0005539508 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 00:35:50 np0005539508 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 00:35:50 np0005539508 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 29 00:35:50 np0005539508 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 29 00:35:50 np0005539508 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 29 00:35:50 np0005539508 kernel: Console: colour VGA+ 80x25
Nov 29 00:35:50 np0005539508 kernel: printk: console [ttyS0] enabled
Nov 29 00:35:50 np0005539508 kernel: ACPI: Core revision 20230331
Nov 29 00:35:50 np0005539508 kernel: APIC: Switch to symmetric I/O mode setup
Nov 29 00:35:50 np0005539508 kernel: x2apic enabled
Nov 29 00:35:50 np0005539508 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 29 00:35:50 np0005539508 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 29 00:35:50 np0005539508 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Nov 29 00:35:50 np0005539508 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 29 00:35:50 np0005539508 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 29 00:35:50 np0005539508 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 29 00:35:50 np0005539508 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 29 00:35:50 np0005539508 kernel: Spectre V2 : Mitigation: Retpolines
Nov 29 00:35:50 np0005539508 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 29 00:35:50 np0005539508 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 29 00:35:50 np0005539508 kernel: RETBleed: Mitigation: untrained return thunk
Nov 29 00:35:50 np0005539508 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 29 00:35:50 np0005539508 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 29 00:35:50 np0005539508 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 29 00:35:50 np0005539508 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 29 00:35:50 np0005539508 kernel: x86/bugs: return thunk changed
Nov 29 00:35:50 np0005539508 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 29 00:35:50 np0005539508 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 29 00:35:50 np0005539508 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 29 00:35:50 np0005539508 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 29 00:35:50 np0005539508 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 29 00:35:50 np0005539508 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 29 00:35:50 np0005539508 kernel: Freeing SMP alternatives memory: 40K
Nov 29 00:35:50 np0005539508 kernel: pid_max: default: 32768 minimum: 301
Nov 29 00:35:50 np0005539508 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 29 00:35:50 np0005539508 kernel: landlock: Up and running.
Nov 29 00:35:50 np0005539508 kernel: Yama: becoming mindful.
Nov 29 00:35:50 np0005539508 kernel: SELinux:  Initializing.
Nov 29 00:35:50 np0005539508 kernel: LSM support for eBPF active
Nov 29 00:35:50 np0005539508 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 29 00:35:50 np0005539508 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 29 00:35:50 np0005539508 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 29 00:35:50 np0005539508 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 29 00:35:50 np0005539508 kernel: ... version:                0
Nov 29 00:35:50 np0005539508 kernel: ... bit width:              48
Nov 29 00:35:50 np0005539508 kernel: ... generic registers:      6
Nov 29 00:35:50 np0005539508 kernel: ... value mask:             0000ffffffffffff
Nov 29 00:35:50 np0005539508 kernel: ... max period:             00007fffffffffff
Nov 29 00:35:50 np0005539508 kernel: ... fixed-purpose events:   0
Nov 29 00:35:50 np0005539508 kernel: ... event mask:             000000000000003f
Nov 29 00:35:50 np0005539508 kernel: signal: max sigframe size: 1776
Nov 29 00:35:50 np0005539508 kernel: rcu: Hierarchical SRCU implementation.
Nov 29 00:35:50 np0005539508 kernel: rcu: 	Max phase no-delay instances is 400.
Nov 29 00:35:50 np0005539508 kernel: smp: Bringing up secondary CPUs ...
Nov 29 00:35:50 np0005539508 kernel: smpboot: x86: Booting SMP configuration:
Nov 29 00:35:50 np0005539508 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 29 00:35:50 np0005539508 kernel: smp: Brought up 1 node, 8 CPUs
Nov 29 00:35:50 np0005539508 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Nov 29 00:35:50 np0005539508 kernel: node 0 deferred pages initialised in 9ms
Nov 29 00:35:50 np0005539508 kernel: Memory: 7765920K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 616268K reserved, 0K cma-reserved)
Nov 29 00:35:50 np0005539508 kernel: devtmpfs: initialized
Nov 29 00:35:50 np0005539508 kernel: x86/mm: Memory block size: 128MB
Nov 29 00:35:50 np0005539508 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 29 00:35:50 np0005539508 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 29 00:35:50 np0005539508 kernel: pinctrl core: initialized pinctrl subsystem
Nov 29 00:35:50 np0005539508 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 29 00:35:50 np0005539508 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 29 00:35:50 np0005539508 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 29 00:35:50 np0005539508 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 29 00:35:50 np0005539508 kernel: audit: initializing netlink subsys (disabled)
Nov 29 00:35:50 np0005539508 kernel: audit: type=2000 audit(1764394549.049:1): state=initialized audit_enabled=0 res=1
Nov 29 00:35:50 np0005539508 kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 29 00:35:50 np0005539508 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 29 00:35:50 np0005539508 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 29 00:35:50 np0005539508 kernel: cpuidle: using governor menu
Nov 29 00:35:50 np0005539508 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 29 00:35:50 np0005539508 kernel: PCI: Using configuration type 1 for base access
Nov 29 00:35:50 np0005539508 kernel: PCI: Using configuration type 1 for extended access
Nov 29 00:35:50 np0005539508 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 29 00:35:50 np0005539508 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 29 00:35:50 np0005539508 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 29 00:35:50 np0005539508 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 29 00:35:50 np0005539508 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 29 00:35:50 np0005539508 kernel: Demotion targets for Node 0: null
Nov 29 00:35:50 np0005539508 kernel: cryptd: max_cpu_qlen set to 1000
Nov 29 00:35:50 np0005539508 kernel: ACPI: Added _OSI(Module Device)
Nov 29 00:35:50 np0005539508 kernel: ACPI: Added _OSI(Processor Device)
Nov 29 00:35:50 np0005539508 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 29 00:35:50 np0005539508 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 29 00:35:50 np0005539508 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 29 00:35:50 np0005539508 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 29 00:35:50 np0005539508 kernel: ACPI: Interpreter enabled
Nov 29 00:35:50 np0005539508 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 29 00:35:50 np0005539508 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 29 00:35:50 np0005539508 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 29 00:35:50 np0005539508 kernel: PCI: Using E820 reservations for host bridge windows
Nov 29 00:35:50 np0005539508 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 29 00:35:50 np0005539508 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 29 00:35:50 np0005539508 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [3] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [4] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [5] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [6] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [7] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [8] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [9] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [10] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [11] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [12] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [13] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [14] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [15] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [16] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [17] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [18] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [19] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [20] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [21] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [22] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [23] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [24] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [25] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [26] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [27] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [28] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [29] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [30] registered
Nov 29 00:35:50 np0005539508 kernel: acpiphp: Slot [31] registered
Nov 29 00:35:50 np0005539508 kernel: PCI host bridge to bus 0000:00
Nov 29 00:35:50 np0005539508 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 29 00:35:50 np0005539508 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 29 00:35:50 np0005539508 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 29 00:35:50 np0005539508 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 29 00:35:50 np0005539508 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 29 00:35:50 np0005539508 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 29 00:35:50 np0005539508 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 29 00:35:50 np0005539508 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 29 00:35:50 np0005539508 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 29 00:35:50 np0005539508 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 29 00:35:50 np0005539508 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 29 00:35:50 np0005539508 kernel: iommu: Default domain type: Translated
Nov 29 00:35:50 np0005539508 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 29 00:35:50 np0005539508 kernel: SCSI subsystem initialized
Nov 29 00:35:50 np0005539508 kernel: ACPI: bus type USB registered
Nov 29 00:35:50 np0005539508 kernel: usbcore: registered new interface driver usbfs
Nov 29 00:35:50 np0005539508 kernel: usbcore: registered new interface driver hub
Nov 29 00:35:50 np0005539508 kernel: usbcore: registered new device driver usb
Nov 29 00:35:50 np0005539508 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 29 00:35:50 np0005539508 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 29 00:35:50 np0005539508 kernel: PTP clock support registered
Nov 29 00:35:50 np0005539508 kernel: EDAC MC: Ver: 3.0.0
Nov 29 00:35:50 np0005539508 kernel: NetLabel: Initializing
Nov 29 00:35:50 np0005539508 kernel: NetLabel:  domain hash size = 128
Nov 29 00:35:50 np0005539508 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 29 00:35:50 np0005539508 kernel: NetLabel:  unlabeled traffic allowed by default
Nov 29 00:35:50 np0005539508 kernel: PCI: Using ACPI for IRQ routing
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 29 00:35:50 np0005539508 kernel: vgaarb: loaded
Nov 29 00:35:50 np0005539508 kernel: clocksource: Switched to clocksource kvm-clock
Nov 29 00:35:50 np0005539508 kernel: VFS: Disk quotas dquot_6.6.0
Nov 29 00:35:50 np0005539508 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 29 00:35:50 np0005539508 kernel: pnp: PnP ACPI init
Nov 29 00:35:50 np0005539508 kernel: pnp: PnP ACPI: found 5 devices
Nov 29 00:35:50 np0005539508 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 29 00:35:50 np0005539508 kernel: NET: Registered PF_INET protocol family
Nov 29 00:35:50 np0005539508 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 29 00:35:50 np0005539508 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 29 00:35:50 np0005539508 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 29 00:35:50 np0005539508 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 29 00:35:50 np0005539508 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 29 00:35:50 np0005539508 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 29 00:35:50 np0005539508 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 29 00:35:50 np0005539508 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 29 00:35:50 np0005539508 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 29 00:35:50 np0005539508 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 29 00:35:50 np0005539508 kernel: NET: Registered PF_XDP protocol family
Nov 29 00:35:50 np0005539508 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 29 00:35:50 np0005539508 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 29 00:35:50 np0005539508 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 29 00:35:50 np0005539508 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 29 00:35:50 np0005539508 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 29 00:35:50 np0005539508 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 29 00:35:50 np0005539508 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 77321 usecs
Nov 29 00:35:50 np0005539508 kernel: PCI: CLS 0 bytes, default 64
Nov 29 00:35:50 np0005539508 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 29 00:35:50 np0005539508 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 29 00:35:50 np0005539508 kernel: ACPI: bus type thunderbolt registered
Nov 29 00:35:50 np0005539508 kernel: Trying to unpack rootfs image as initramfs...
Nov 29 00:35:50 np0005539508 kernel: Initialise system trusted keyrings
Nov 29 00:35:50 np0005539508 kernel: Key type blacklist registered
Nov 29 00:35:50 np0005539508 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 29 00:35:50 np0005539508 kernel: zbud: loaded
Nov 29 00:35:50 np0005539508 kernel: integrity: Platform Keyring initialized
Nov 29 00:35:50 np0005539508 kernel: integrity: Machine keyring initialized
Nov 29 00:35:50 np0005539508 kernel: Freeing initrd memory: 85868K
Nov 29 00:35:50 np0005539508 kernel: NET: Registered PF_ALG protocol family
Nov 29 00:35:50 np0005539508 kernel: xor: automatically using best checksumming function   avx       
Nov 29 00:35:50 np0005539508 kernel: Key type asymmetric registered
Nov 29 00:35:50 np0005539508 kernel: Asymmetric key parser 'x509' registered
Nov 29 00:35:50 np0005539508 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 29 00:35:50 np0005539508 kernel: io scheduler mq-deadline registered
Nov 29 00:35:50 np0005539508 kernel: io scheduler kyber registered
Nov 29 00:35:50 np0005539508 kernel: io scheduler bfq registered
Nov 29 00:35:50 np0005539508 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 29 00:35:50 np0005539508 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 29 00:35:50 np0005539508 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 29 00:35:50 np0005539508 kernel: ACPI: button: Power Button [PWRF]
Nov 29 00:35:50 np0005539508 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 29 00:35:50 np0005539508 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 29 00:35:50 np0005539508 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 29 00:35:50 np0005539508 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 29 00:35:50 np0005539508 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 29 00:35:50 np0005539508 kernel: Non-volatile memory driver v1.3
Nov 29 00:35:50 np0005539508 kernel: rdac: device handler registered
Nov 29 00:35:50 np0005539508 kernel: hp_sw: device handler registered
Nov 29 00:35:50 np0005539508 kernel: emc: device handler registered
Nov 29 00:35:50 np0005539508 kernel: alua: device handler registered
Nov 29 00:35:50 np0005539508 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 29 00:35:50 np0005539508 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 29 00:35:50 np0005539508 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 29 00:35:50 np0005539508 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 29 00:35:50 np0005539508 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 29 00:35:50 np0005539508 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 29 00:35:50 np0005539508 kernel: usb usb1: Product: UHCI Host Controller
Nov 29 00:35:50 np0005539508 kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 29 00:35:50 np0005539508 kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 29 00:35:50 np0005539508 kernel: hub 1-0:1.0: USB hub found
Nov 29 00:35:50 np0005539508 kernel: hub 1-0:1.0: 2 ports detected
Nov 29 00:35:50 np0005539508 kernel: usbcore: registered new interface driver usbserial_generic
Nov 29 00:35:50 np0005539508 kernel: usbserial: USB Serial support registered for generic
Nov 29 00:35:50 np0005539508 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 29 00:35:50 np0005539508 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 29 00:35:50 np0005539508 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 29 00:35:50 np0005539508 kernel: mousedev: PS/2 mouse device common for all mice
Nov 29 00:35:50 np0005539508 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 29 00:35:50 np0005539508 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 29 00:35:50 np0005539508 kernel: rtc_cmos 00:04: registered as rtc0
Nov 29 00:35:50 np0005539508 kernel: rtc_cmos 00:04: setting system clock to 2025-11-29T05:35:49 UTC (1764394549)
Nov 29 00:35:50 np0005539508 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 29 00:35:50 np0005539508 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 29 00:35:50 np0005539508 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 29 00:35:50 np0005539508 kernel: usbcore: registered new interface driver usbhid
Nov 29 00:35:50 np0005539508 kernel: usbhid: USB HID core driver
Nov 29 00:35:50 np0005539508 kernel: drop_monitor: Initializing network drop monitor service
Nov 29 00:35:50 np0005539508 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 29 00:35:50 np0005539508 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 29 00:35:50 np0005539508 kernel: Initializing XFRM netlink socket
Nov 29 00:35:50 np0005539508 kernel: NET: Registered PF_INET6 protocol family
Nov 29 00:35:50 np0005539508 kernel: Segment Routing with IPv6
Nov 29 00:35:50 np0005539508 kernel: NET: Registered PF_PACKET protocol family
Nov 29 00:35:50 np0005539508 kernel: mpls_gso: MPLS GSO support
Nov 29 00:35:50 np0005539508 kernel: IPI shorthand broadcast: enabled
Nov 29 00:35:50 np0005539508 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 29 00:35:50 np0005539508 kernel: AES CTR mode by8 optimization enabled
Nov 29 00:35:50 np0005539508 kernel: sched_clock: Marking stable (1275011990, 133424950)->(1594138089, -185701149)
Nov 29 00:35:50 np0005539508 kernel: registered taskstats version 1
Nov 29 00:35:50 np0005539508 kernel: Loading compiled-in X.509 certificates
Nov 29 00:35:50 np0005539508 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 29 00:35:50 np0005539508 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 29 00:35:50 np0005539508 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 29 00:35:50 np0005539508 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 29 00:35:50 np0005539508 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 29 00:35:50 np0005539508 kernel: Demotion targets for Node 0: null
Nov 29 00:35:50 np0005539508 kernel: page_owner is disabled
Nov 29 00:35:50 np0005539508 kernel: Key type .fscrypt registered
Nov 29 00:35:50 np0005539508 kernel: Key type fscrypt-provisioning registered
Nov 29 00:35:50 np0005539508 kernel: Key type big_key registered
Nov 29 00:35:50 np0005539508 kernel: Key type encrypted registered
Nov 29 00:35:50 np0005539508 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 29 00:35:50 np0005539508 kernel: Loading compiled-in module X.509 certificates
Nov 29 00:35:50 np0005539508 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 29 00:35:50 np0005539508 kernel: ima: Allocated hash algorithm: sha256
Nov 29 00:35:50 np0005539508 kernel: ima: No architecture policies found
Nov 29 00:35:50 np0005539508 kernel: evm: Initialising EVM extended attributes:
Nov 29 00:35:50 np0005539508 kernel: evm: security.selinux
Nov 29 00:35:50 np0005539508 kernel: evm: security.SMACK64 (disabled)
Nov 29 00:35:50 np0005539508 kernel: evm: security.SMACK64EXEC (disabled)
Nov 29 00:35:50 np0005539508 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 29 00:35:50 np0005539508 kernel: evm: security.SMACK64MMAP (disabled)
Nov 29 00:35:50 np0005539508 kernel: evm: security.apparmor (disabled)
Nov 29 00:35:50 np0005539508 kernel: evm: security.ima
Nov 29 00:35:50 np0005539508 kernel: evm: security.capability
Nov 29 00:35:50 np0005539508 kernel: evm: HMAC attrs: 0x1
Nov 29 00:35:50 np0005539508 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 29 00:35:50 np0005539508 kernel: Running certificate verification RSA selftest
Nov 29 00:35:50 np0005539508 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 29 00:35:50 np0005539508 kernel: Running certificate verification ECDSA selftest
Nov 29 00:35:50 np0005539508 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 29 00:35:50 np0005539508 kernel: clk: Disabling unused clocks
Nov 29 00:35:50 np0005539508 kernel: Freeing unused decrypted memory: 2028K
Nov 29 00:35:50 np0005539508 kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 29 00:35:50 np0005539508 kernel: Write protecting the kernel read-only data: 30720k
Nov 29 00:35:50 np0005539508 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 29 00:35:50 np0005539508 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 29 00:35:50 np0005539508 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 29 00:35:50 np0005539508 kernel: usb 1-1: Product: QEMU USB Tablet
Nov 29 00:35:50 np0005539508 kernel: usb 1-1: Manufacturer: QEMU
Nov 29 00:35:50 np0005539508 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 29 00:35:50 np0005539508 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 29 00:35:50 np0005539508 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 29 00:35:50 np0005539508 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 29 00:35:50 np0005539508 kernel: Run /init as init process
Nov 29 00:35:50 np0005539508 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 29 00:35:50 np0005539508 systemd: Detected virtualization kvm.
Nov 29 00:35:50 np0005539508 systemd: Detected architecture x86-64.
Nov 29 00:35:50 np0005539508 systemd: Running in initrd.
Nov 29 00:35:50 np0005539508 systemd: No hostname configured, using default hostname.
Nov 29 00:35:50 np0005539508 systemd: Hostname set to <localhost>.
Nov 29 00:35:50 np0005539508 systemd: Initializing machine ID from VM UUID.
Nov 29 00:35:50 np0005539508 systemd: Queued start job for default target Initrd Default Target.
Nov 29 00:35:50 np0005539508 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 29 00:35:50 np0005539508 systemd: Reached target Local Encrypted Volumes.
Nov 29 00:35:50 np0005539508 systemd: Reached target Initrd /usr File System.
Nov 29 00:35:50 np0005539508 systemd: Reached target Local File Systems.
Nov 29 00:35:50 np0005539508 systemd: Reached target Path Units.
Nov 29 00:35:50 np0005539508 systemd: Reached target Slice Units.
Nov 29 00:35:50 np0005539508 systemd: Reached target Swaps.
Nov 29 00:35:50 np0005539508 systemd: Reached target Timer Units.
Nov 29 00:35:50 np0005539508 systemd: Listening on D-Bus System Message Bus Socket.
Nov 29 00:35:50 np0005539508 systemd: Listening on Journal Socket (/dev/log).
Nov 29 00:35:50 np0005539508 systemd: Listening on Journal Socket.
Nov 29 00:35:50 np0005539508 systemd: Listening on udev Control Socket.
Nov 29 00:35:50 np0005539508 systemd: Listening on udev Kernel Socket.
Nov 29 00:35:50 np0005539508 systemd: Reached target Socket Units.
Nov 29 00:35:50 np0005539508 systemd: Starting Create List of Static Device Nodes...
Nov 29 00:35:50 np0005539508 systemd: Starting Journal Service...
Nov 29 00:35:50 np0005539508 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 29 00:35:50 np0005539508 systemd: Starting Apply Kernel Variables...
Nov 29 00:35:50 np0005539508 systemd: Starting Create System Users...
Nov 29 00:35:50 np0005539508 systemd: Starting Setup Virtual Console...
Nov 29 00:35:50 np0005539508 systemd: Finished Create List of Static Device Nodes.
Nov 29 00:35:50 np0005539508 systemd: Finished Apply Kernel Variables.
Nov 29 00:35:50 np0005539508 systemd: Finished Create System Users.
Nov 29 00:35:50 np0005539508 systemd-journald[305]: Journal started
Nov 29 00:35:50 np0005539508 systemd-journald[305]: Runtime Journal (/run/log/journal/c87c7517e5694e428023b11f25bc4e0c) is 8.0M, max 153.6M, 145.6M free.
Nov 29 00:35:50 np0005539508 systemd-sysusers[310]: Creating group 'users' with GID 100.
Nov 29 00:35:50 np0005539508 systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Nov 29 00:35:50 np0005539508 systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 29 00:35:50 np0005539508 systemd: Started Journal Service.
Nov 29 00:35:50 np0005539508 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 29 00:35:50 np0005539508 systemd[1]: Starting Create Volatile Files and Directories...
Nov 29 00:35:50 np0005539508 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 29 00:35:50 np0005539508 systemd[1]: Finished Create Volatile Files and Directories.
Nov 29 00:35:50 np0005539508 systemd[1]: Finished Setup Virtual Console.
Nov 29 00:35:50 np0005539508 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 29 00:35:50 np0005539508 systemd[1]: Starting dracut cmdline hook...
Nov 29 00:35:50 np0005539508 dracut-cmdline[326]: dracut-9 dracut-057-102.git20250818.el9
Nov 29 00:35:50 np0005539508 dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 00:35:50 np0005539508 systemd[1]: Finished dracut cmdline hook.
Nov 29 00:35:50 np0005539508 systemd[1]: Starting dracut pre-udev hook...
Nov 29 00:35:50 np0005539508 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 29 00:35:50 np0005539508 kernel: device-mapper: uevent: version 1.0.3
Nov 29 00:35:50 np0005539508 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 29 00:35:50 np0005539508 kernel: RPC: Registered named UNIX socket transport module.
Nov 29 00:35:50 np0005539508 kernel: RPC: Registered udp transport module.
Nov 29 00:35:50 np0005539508 kernel: RPC: Registered tcp transport module.
Nov 29 00:35:50 np0005539508 kernel: RPC: Registered tcp-with-tls transport module.
Nov 29 00:35:50 np0005539508 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 29 00:35:50 np0005539508 rpc.statd[444]: Version 2.5.4 starting
Nov 29 00:35:50 np0005539508 rpc.statd[444]: Initializing NSM state
Nov 29 00:35:50 np0005539508 rpc.idmapd[449]: Setting log level to 0
Nov 29 00:35:50 np0005539508 systemd[1]: Finished dracut pre-udev hook.
Nov 29 00:35:50 np0005539508 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 29 00:35:50 np0005539508 systemd-udevd[462]: Using default interface naming scheme 'rhel-9.0'.
Nov 29 00:35:50 np0005539508 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 29 00:35:51 np0005539508 systemd[1]: Starting dracut pre-trigger hook...
Nov 29 00:35:51 np0005539508 systemd[1]: Finished dracut pre-trigger hook.
Nov 29 00:35:51 np0005539508 systemd[1]: Starting Coldplug All udev Devices...
Nov 29 00:35:51 np0005539508 systemd[1]: Created slice Slice /system/modprobe.
Nov 29 00:35:51 np0005539508 systemd[1]: Starting Load Kernel Module configfs...
Nov 29 00:35:51 np0005539508 systemd[1]: Finished Coldplug All udev Devices.
Nov 29 00:35:51 np0005539508 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 00:35:51 np0005539508 systemd[1]: Finished Load Kernel Module configfs.
Nov 29 00:35:51 np0005539508 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 29 00:35:51 np0005539508 systemd[1]: Reached target Network.
Nov 29 00:35:51 np0005539508 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 29 00:35:51 np0005539508 systemd[1]: Starting dracut initqueue hook...
Nov 29 00:35:51 np0005539508 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 29 00:35:51 np0005539508 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 29 00:35:51 np0005539508 kernel: vda: vda1
Nov 29 00:35:51 np0005539508 systemd-udevd[496]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 00:35:51 np0005539508 kernel: scsi host0: ata_piix
Nov 29 00:35:51 np0005539508 kernel: scsi host1: ata_piix
Nov 29 00:35:51 np0005539508 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 29 00:35:51 np0005539508 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 29 00:35:51 np0005539508 systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 29 00:35:51 np0005539508 systemd[1]: Reached target Initrd Root Device.
Nov 29 00:35:51 np0005539508 systemd[1]: Mounting Kernel Configuration File System...
Nov 29 00:35:51 np0005539508 systemd[1]: Mounted Kernel Configuration File System.
Nov 29 00:35:51 np0005539508 systemd[1]: Reached target System Initialization.
Nov 29 00:35:51 np0005539508 kernel: ata1: found unknown device (class 0)
Nov 29 00:35:51 np0005539508 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 29 00:35:51 np0005539508 systemd[1]: Reached target Basic System.
Nov 29 00:35:51 np0005539508 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 29 00:35:51 np0005539508 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 29 00:35:51 np0005539508 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 29 00:35:51 np0005539508 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 29 00:35:51 np0005539508 systemd[1]: Finished dracut initqueue hook.
Nov 29 00:35:51 np0005539508 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 29 00:35:51 np0005539508 systemd[1]: Reached target Remote Encrypted Volumes.
Nov 29 00:35:51 np0005539508 systemd[1]: Reached target Remote File Systems.
Nov 29 00:35:51 np0005539508 systemd[1]: Starting dracut pre-mount hook...
Nov 29 00:35:51 np0005539508 systemd[1]: Finished dracut pre-mount hook.
Nov 29 00:35:51 np0005539508 systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Nov 29 00:35:51 np0005539508 systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Nov 29 00:35:51 np0005539508 systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 29 00:35:51 np0005539508 systemd[1]: Mounting /sysroot...
Nov 29 00:35:52 np0005539508 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 29 00:35:52 np0005539508 kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Nov 29 00:35:52 np0005539508 kernel: XFS (vda1): Ending clean mount
Nov 29 00:35:52 np0005539508 systemd[1]: Mounted /sysroot.
Nov 29 00:35:52 np0005539508 systemd[1]: Reached target Initrd Root File System.
Nov 29 00:35:52 np0005539508 systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 29 00:35:52 np0005539508 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 29 00:35:52 np0005539508 systemd[1]: Reached target Initrd File Systems.
Nov 29 00:35:52 np0005539508 systemd[1]: Reached target Initrd Default Target.
Nov 29 00:35:52 np0005539508 systemd[1]: Starting dracut mount hook...
Nov 29 00:35:52 np0005539508 systemd[1]: Finished dracut mount hook.
Nov 29 00:35:52 np0005539508 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 29 00:35:52 np0005539508 rpc.idmapd[449]: exiting on signal 15
Nov 29 00:35:52 np0005539508 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 29 00:35:52 np0005539508 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped target Network.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped target Timer Units.
Nov 29 00:35:52 np0005539508 systemd[1]: dbus.socket: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 29 00:35:52 np0005539508 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped target Initrd Default Target.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped target Basic System.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped target Initrd Root Device.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped target Initrd /usr File System.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped target Path Units.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped target Remote File Systems.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped target Slice Units.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped target Socket Units.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped target System Initialization.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped target Local File Systems.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped target Swaps.
Nov 29 00:35:52 np0005539508 systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped dracut mount hook.
Nov 29 00:35:52 np0005539508 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped dracut pre-mount hook.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped target Local Encrypted Volumes.
Nov 29 00:35:52 np0005539508 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 29 00:35:52 np0005539508 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped dracut initqueue hook.
Nov 29 00:35:52 np0005539508 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped Apply Kernel Variables.
Nov 29 00:35:52 np0005539508 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped Create Volatile Files and Directories.
Nov 29 00:35:52 np0005539508 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped Coldplug All udev Devices.
Nov 29 00:35:52 np0005539508 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped dracut pre-trigger hook.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 29 00:35:52 np0005539508 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped Setup Virtual Console.
Nov 29 00:35:52 np0005539508 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 29 00:35:52 np0005539508 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 29 00:35:52 np0005539508 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: Closed udev Control Socket.
Nov 29 00:35:52 np0005539508 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: Closed udev Kernel Socket.
Nov 29 00:35:52 np0005539508 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped dracut pre-udev hook.
Nov 29 00:35:52 np0005539508 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped dracut cmdline hook.
Nov 29 00:35:52 np0005539508 systemd[1]: Starting Cleanup udev Database...
Nov 29 00:35:52 np0005539508 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 29 00:35:52 np0005539508 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped Create List of Static Device Nodes.
Nov 29 00:35:52 np0005539508 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: Stopped Create System Users.
Nov 29 00:35:52 np0005539508 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 29 00:35:52 np0005539508 systemd[1]: Finished Cleanup udev Database.
Nov 29 00:35:52 np0005539508 systemd[1]: Reached target Switch Root.
Nov 29 00:35:52 np0005539508 systemd[1]: Starting Switch Root...
Nov 29 00:35:52 np0005539508 systemd[1]: Switching root.
Nov 29 00:35:52 np0005539508 systemd-journald[305]: Received SIGTERM from PID 1 (systemd).
Nov 29 00:35:52 np0005539508 systemd-journald[305]: Journal stopped
Nov 29 00:35:53 np0005539508 kernel: audit: type=1404 audit(1764394552.630:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 29 00:35:53 np0005539508 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 00:35:53 np0005539508 kernel: SELinux:  policy capability open_perms=1
Nov 29 00:35:53 np0005539508 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 00:35:53 np0005539508 kernel: SELinux:  policy capability always_check_network=0
Nov 29 00:35:53 np0005539508 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 00:35:53 np0005539508 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 00:35:53 np0005539508 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 00:35:53 np0005539508 kernel: audit: type=1403 audit(1764394552.777:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 29 00:35:53 np0005539508 systemd: Successfully loaded SELinux policy in 153.934ms.
Nov 29 00:35:53 np0005539508 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.124ms.
Nov 29 00:35:53 np0005539508 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 29 00:35:53 np0005539508 systemd: Detected virtualization kvm.
Nov 29 00:35:53 np0005539508 systemd: Detected architecture x86-64.
Nov 29 00:35:53 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:35:53 np0005539508 systemd: initrd-switch-root.service: Deactivated successfully.
Nov 29 00:35:53 np0005539508 systemd: Stopped Switch Root.
Nov 29 00:35:53 np0005539508 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 29 00:35:53 np0005539508 systemd: Created slice Slice /system/getty.
Nov 29 00:35:53 np0005539508 systemd: Created slice Slice /system/serial-getty.
Nov 29 00:35:53 np0005539508 systemd: Created slice Slice /system/sshd-keygen.
Nov 29 00:35:53 np0005539508 systemd: Created slice User and Session Slice.
Nov 29 00:35:53 np0005539508 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 29 00:35:53 np0005539508 systemd: Started Forward Password Requests to Wall Directory Watch.
Nov 29 00:35:53 np0005539508 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 29 00:35:53 np0005539508 systemd: Reached target Local Encrypted Volumes.
Nov 29 00:35:53 np0005539508 systemd: Stopped target Switch Root.
Nov 29 00:35:53 np0005539508 systemd: Stopped target Initrd File Systems.
Nov 29 00:35:53 np0005539508 systemd: Stopped target Initrd Root File System.
Nov 29 00:35:53 np0005539508 systemd: Reached target Local Integrity Protected Volumes.
Nov 29 00:35:53 np0005539508 systemd: Reached target Path Units.
Nov 29 00:35:53 np0005539508 systemd: Reached target rpc_pipefs.target.
Nov 29 00:35:53 np0005539508 systemd: Reached target Slice Units.
Nov 29 00:35:53 np0005539508 systemd: Reached target Swaps.
Nov 29 00:35:53 np0005539508 systemd: Reached target Local Verity Protected Volumes.
Nov 29 00:35:53 np0005539508 systemd: Listening on RPCbind Server Activation Socket.
Nov 29 00:35:53 np0005539508 systemd: Reached target RPC Port Mapper.
Nov 29 00:35:53 np0005539508 systemd: Listening on Process Core Dump Socket.
Nov 29 00:35:53 np0005539508 systemd: Listening on initctl Compatibility Named Pipe.
Nov 29 00:35:53 np0005539508 systemd: Listening on udev Control Socket.
Nov 29 00:35:53 np0005539508 systemd: Listening on udev Kernel Socket.
Nov 29 00:35:53 np0005539508 systemd: Mounting Huge Pages File System...
Nov 29 00:35:53 np0005539508 systemd: Mounting POSIX Message Queue File System...
Nov 29 00:35:53 np0005539508 systemd: Mounting Kernel Debug File System...
Nov 29 00:35:53 np0005539508 systemd: Mounting Kernel Trace File System...
Nov 29 00:35:53 np0005539508 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 29 00:35:53 np0005539508 systemd: Starting Create List of Static Device Nodes...
Nov 29 00:35:53 np0005539508 systemd: Starting Load Kernel Module configfs...
Nov 29 00:35:53 np0005539508 systemd: Starting Load Kernel Module drm...
Nov 29 00:35:53 np0005539508 systemd: Starting Load Kernel Module efi_pstore...
Nov 29 00:35:53 np0005539508 systemd: Starting Load Kernel Module fuse...
Nov 29 00:35:53 np0005539508 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 29 00:35:53 np0005539508 systemd: systemd-fsck-root.service: Deactivated successfully.
Nov 29 00:35:53 np0005539508 systemd: Stopped File System Check on Root Device.
Nov 29 00:35:53 np0005539508 systemd: Stopped Journal Service.
Nov 29 00:35:53 np0005539508 systemd: Starting Journal Service...
Nov 29 00:35:53 np0005539508 kernel: fuse: init (API version 7.37)
Nov 29 00:35:53 np0005539508 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 29 00:35:53 np0005539508 systemd: Starting Generate network units from Kernel command line...
Nov 29 00:35:53 np0005539508 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 00:35:53 np0005539508 systemd: Starting Remount Root and Kernel File Systems...
Nov 29 00:35:53 np0005539508 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 29 00:35:53 np0005539508 systemd: Starting Apply Kernel Variables...
Nov 29 00:35:53 np0005539508 systemd: Starting Coldplug All udev Devices...
Nov 29 00:35:53 np0005539508 systemd-journald[683]: Journal started
Nov 29 00:35:53 np0005539508 systemd-journald[683]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 29 00:35:53 np0005539508 systemd[1]: Queued start job for default target Multi-User System.
Nov 29 00:35:53 np0005539508 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 29 00:35:53 np0005539508 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 29 00:35:53 np0005539508 systemd: Started Journal Service.
Nov 29 00:35:53 np0005539508 systemd[1]: Mounted Huge Pages File System.
Nov 29 00:35:53 np0005539508 systemd[1]: Mounted POSIX Message Queue File System.
Nov 29 00:35:53 np0005539508 systemd[1]: Mounted Kernel Debug File System.
Nov 29 00:35:53 np0005539508 systemd[1]: Mounted Kernel Trace File System.
Nov 29 00:35:53 np0005539508 systemd[1]: Finished Create List of Static Device Nodes.
Nov 29 00:35:53 np0005539508 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 00:35:53 np0005539508 systemd[1]: Finished Load Kernel Module configfs.
Nov 29 00:35:53 np0005539508 kernel: ACPI: bus type drm_connector registered
Nov 29 00:35:53 np0005539508 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 29 00:35:53 np0005539508 systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 29 00:35:53 np0005539508 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 29 00:35:53 np0005539508 systemd[1]: Finished Load Kernel Module drm.
Nov 29 00:35:53 np0005539508 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 29 00:35:53 np0005539508 systemd[1]: Finished Load Kernel Module fuse.
Nov 29 00:35:53 np0005539508 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 29 00:35:53 np0005539508 systemd[1]: Finished Generate network units from Kernel command line.
Nov 29 00:35:53 np0005539508 systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 29 00:35:53 np0005539508 systemd[1]: Finished Apply Kernel Variables.
Nov 29 00:35:53 np0005539508 systemd[1]: Mounting FUSE Control File System...
Nov 29 00:35:53 np0005539508 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 29 00:35:53 np0005539508 systemd[1]: Starting Rebuild Hardware Database...
Nov 29 00:35:53 np0005539508 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 29 00:35:53 np0005539508 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 29 00:35:53 np0005539508 systemd[1]: Starting Load/Save OS Random Seed...
Nov 29 00:35:53 np0005539508 systemd[1]: Starting Create System Users...
Nov 29 00:35:53 np0005539508 systemd[1]: Mounted FUSE Control File System.
Nov 29 00:35:53 np0005539508 systemd-journald[683]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 29 00:35:53 np0005539508 systemd-journald[683]: Received client request to flush runtime journal.
Nov 29 00:35:53 np0005539508 systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 29 00:35:53 np0005539508 systemd[1]: Finished Coldplug All udev Devices.
Nov 29 00:35:53 np0005539508 systemd[1]: Finished Load/Save OS Random Seed.
Nov 29 00:35:53 np0005539508 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 29 00:35:53 np0005539508 systemd[1]: Finished Create System Users.
Nov 29 00:35:53 np0005539508 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 29 00:35:53 np0005539508 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 29 00:35:53 np0005539508 systemd[1]: Reached target Preparation for Local File Systems.
Nov 29 00:35:53 np0005539508 systemd[1]: Reached target Local File Systems.
Nov 29 00:35:53 np0005539508 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 29 00:35:53 np0005539508 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 29 00:35:53 np0005539508 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 29 00:35:53 np0005539508 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 29 00:35:53 np0005539508 systemd[1]: Starting Automatic Boot Loader Update...
Nov 29 00:35:53 np0005539508 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 29 00:35:53 np0005539508 systemd[1]: Starting Create Volatile Files and Directories...
Nov 29 00:35:53 np0005539508 bootctl[700]: Couldn't find EFI system partition, skipping.
Nov 29 00:35:53 np0005539508 systemd[1]: Finished Automatic Boot Loader Update.
Nov 29 00:35:53 np0005539508 systemd[1]: Finished Create Volatile Files and Directories.
Nov 29 00:35:53 np0005539508 systemd[1]: Starting Security Auditing Service...
Nov 29 00:35:53 np0005539508 systemd[1]: Starting RPC Bind...
Nov 29 00:35:53 np0005539508 systemd[1]: Starting Rebuild Journal Catalog...
Nov 29 00:35:53 np0005539508 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 29 00:35:53 np0005539508 auditd[707]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 29 00:35:53 np0005539508 auditd[707]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 29 00:35:53 np0005539508 systemd[1]: Started RPC Bind.
Nov 29 00:35:53 np0005539508 systemd[1]: Finished Rebuild Journal Catalog.
Nov 29 00:35:53 np0005539508 augenrules[712]: /sbin/augenrules: No change
Nov 29 00:35:53 np0005539508 augenrules[727]: No rules
Nov 29 00:35:53 np0005539508 augenrules[727]: enabled 1
Nov 29 00:35:53 np0005539508 augenrules[727]: failure 1
Nov 29 00:35:53 np0005539508 augenrules[727]: pid 707
Nov 29 00:35:53 np0005539508 augenrules[727]: rate_limit 0
Nov 29 00:35:53 np0005539508 augenrules[727]: backlog_limit 8192
Nov 29 00:35:53 np0005539508 augenrules[727]: lost 0
Nov 29 00:35:53 np0005539508 augenrules[727]: backlog 0
Nov 29 00:35:53 np0005539508 augenrules[727]: backlog_wait_time 60000
Nov 29 00:35:53 np0005539508 augenrules[727]: backlog_wait_time_actual 0
Nov 29 00:35:53 np0005539508 augenrules[727]: enabled 1
Nov 29 00:35:53 np0005539508 augenrules[727]: failure 1
Nov 29 00:35:53 np0005539508 augenrules[727]: pid 707
Nov 29 00:35:53 np0005539508 augenrules[727]: rate_limit 0
Nov 29 00:35:53 np0005539508 augenrules[727]: backlog_limit 8192
Nov 29 00:35:53 np0005539508 augenrules[727]: lost 0
Nov 29 00:35:53 np0005539508 augenrules[727]: backlog 1
Nov 29 00:35:53 np0005539508 augenrules[727]: backlog_wait_time 60000
Nov 29 00:35:53 np0005539508 augenrules[727]: backlog_wait_time_actual 0
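The augenrules status dump above is a flat key/value report from the kernel audit subsystem (pid 707 is the auditd process started a few lines earlier). A small illustrative Python parser, assuming one `key value` pair per line exactly as it appears in this log:

```python
status_lines = """\
enabled 1
failure 1
pid 707
rate_limit 0
backlog_limit 8192
lost 0
backlog 0
backlog_wait_time 60000
backlog_wait_time_actual 0""".splitlines()

# Each line is "<field> <integer>"; collect into a dict of ints.
status = {}
for line in status_lines:
    key, value = line.rsplit(" ", 1)
    status[key] = int(value)

assert status["backlog_limit"] == 8192  # kernel audit backlog cap
assert status["lost"] == 0              # no audit records dropped
```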
Nov 29 00:35:53 np0005539508 systemd[1]: Started Security Auditing Service.
Nov 29 00:35:53 np0005539508 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 29 00:35:53 np0005539508 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 29 00:35:54 np0005539508 systemd[1]: Finished Rebuild Hardware Database.
Nov 29 00:35:54 np0005539508 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 29 00:35:54 np0005539508 systemd[1]: Starting Update is Completed...
Nov 29 00:35:54 np0005539508 systemd[1]: Finished Update is Completed.
Nov 29 00:35:54 np0005539508 systemd-udevd[735]: Using default interface naming scheme 'rhel-9.0'.
Nov 29 00:35:54 np0005539508 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 29 00:35:54 np0005539508 systemd[1]: Reached target System Initialization.
Nov 29 00:35:54 np0005539508 systemd[1]: Started dnf makecache --timer.
Nov 29 00:35:54 np0005539508 systemd[1]: Started Daily rotation of log files.
Nov 29 00:35:54 np0005539508 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 29 00:35:54 np0005539508 systemd[1]: Reached target Timer Units.
Nov 29 00:35:54 np0005539508 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 29 00:35:54 np0005539508 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 29 00:35:54 np0005539508 systemd[1]: Reached target Socket Units.
Nov 29 00:35:54 np0005539508 systemd[1]: Starting D-Bus System Message Bus...
Nov 29 00:35:54 np0005539508 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 00:35:54 np0005539508 systemd[1]: Starting Load Kernel Module configfs...
Nov 29 00:35:54 np0005539508 systemd-udevd[752]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 00:35:54 np0005539508 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 00:35:54 np0005539508 systemd[1]: Finished Load Kernel Module configfs.
Nov 29 00:35:54 np0005539508 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 29 00:35:54 np0005539508 systemd[1]: Started D-Bus System Message Bus.
Nov 29 00:35:54 np0005539508 systemd[1]: Reached target Basic System.
Nov 29 00:35:54 np0005539508 dbus-broker-lau[771]: Ready
Nov 29 00:35:54 np0005539508 systemd[1]: Starting NTP client/server...
Nov 29 00:35:54 np0005539508 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 29 00:35:54 np0005539508 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 29 00:35:54 np0005539508 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 29 00:35:54 np0005539508 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 29 00:35:54 np0005539508 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 29 00:35:54 np0005539508 systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 29 00:35:54 np0005539508 systemd[1]: Starting IPv4 firewall with iptables...
Nov 29 00:35:54 np0005539508 systemd[1]: Started irqbalance daemon.
Nov 29 00:35:54 np0005539508 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 29 00:35:54 np0005539508 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 00:35:54 np0005539508 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 00:35:54 np0005539508 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
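Several units above were skipped on `ConditionPathExists=` checks; a leading `!` negates the test, which is why the three OpenSSH key-generation units are skipped precisely *because* the cloud-init target link exists (cloud-init generates the host keys itself, as seen later in this log). A rough Python emulation of just that one condition type, not systemd's actual implementation:

```python
import os

def condition_path_exists(expr: str) -> bool:
    """Emulate systemd's ConditionPathExists=: a '!' prefix negates the test."""
    negate = expr.startswith("!")
    path = expr[1:] if negate else expr
    exists = os.path.exists(path)
    return not exists if negate else exists

# '/' always exists, so the plain form passes and the negated form fails.
assert condition_path_exists("/") is True
assert condition_path_exists("!/") is False
```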
Nov 29 00:35:54 np0005539508 systemd[1]: Reached target sshd-keygen.target.
Nov 29 00:35:54 np0005539508 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 29 00:35:54 np0005539508 systemd[1]: Reached target User and Group Name Lookups.
Nov 29 00:35:55 np0005539508 chronyd[800]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 00:35:55 np0005539508 chronyd[800]: Loaded 0 symmetric keys
Nov 29 00:35:55 np0005539508 chronyd[800]: Using right/UTC timezone to obtain leap second data
Nov 29 00:35:55 np0005539508 chronyd[800]: Loaded seccomp filter (level 2)
Nov 29 00:35:55 np0005539508 systemd[1]: Starting User Login Management...
Nov 29 00:35:55 np0005539508 systemd[1]: Started NTP client/server.
Nov 29 00:35:55 np0005539508 systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 29 00:35:55 np0005539508 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 29 00:35:55 np0005539508 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 29 00:35:55 np0005539508 kernel: kvm_amd: TSC scaling supported
Nov 29 00:35:55 np0005539508 kernel: kvm_amd: Nested Virtualization enabled
Nov 29 00:35:55 np0005539508 kernel: kvm_amd: Nested Paging enabled
Nov 29 00:35:55 np0005539508 kernel: kvm_amd: LBR virtualization supported
Nov 29 00:35:55 np0005539508 systemd-logind[797]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 00:35:55 np0005539508 systemd-logind[797]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 00:35:55 np0005539508 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 29 00:35:55 np0005539508 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 29 00:35:55 np0005539508 iptables.init[784]: iptables: Applying firewall rules: [  OK  ]
Nov 29 00:35:55 np0005539508 systemd[1]: Finished IPv4 firewall with iptables.
Nov 29 00:35:55 np0005539508 systemd-logind[797]: New seat seat0.
Nov 29 00:35:55 np0005539508 kernel: Console: switching to colour dummy device 80x25
Nov 29 00:35:55 np0005539508 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 29 00:35:55 np0005539508 kernel: [drm] features: -context_init
Nov 29 00:35:55 np0005539508 kernel: [drm] number of scanouts: 1
Nov 29 00:35:55 np0005539508 kernel: [drm] number of cap sets: 0
Nov 29 00:35:55 np0005539508 systemd[1]: Started User Login Management.
Nov 29 00:35:55 np0005539508 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 29 00:35:55 np0005539508 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 29 00:35:55 np0005539508 kernel: Console: switching to colour frame buffer device 128x48
Nov 29 00:35:55 np0005539508 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 29 00:35:55 np0005539508 cloud-init[843]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 29 Nov 2025 05:35:55 +0000. Up 7.19 seconds.
Nov 29 00:35:55 np0005539508 systemd[1]: run-cloud\x2dinit-tmp-tmpea_o52zy.mount: Deactivated successfully.
Nov 29 00:35:55 np0005539508 systemd[1]: Starting Hostname Service...
Nov 29 00:35:55 np0005539508 systemd[1]: Started Hostname Service.
Nov 29 00:35:55 np0005539508 systemd-hostnamed[857]: Hostname set to <np0005539508.novalocal> (static)
Nov 29 00:35:55 np0005539508 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 29 00:35:55 np0005539508 systemd[1]: Reached target Preparation for Network.
Nov 29 00:35:55 np0005539508 systemd[1]: Starting Network Manager...
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0116] NetworkManager (version 1.54.1-1.el9) is starting... (boot:b7b17a39-22f5-4f4f-9861-b1bcbadcfe77)
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0122] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0196] manager[0x55da37017080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0232] hostname: hostname: using hostnamed
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0232] hostname: static hostname changed from (none) to "np0005539508.novalocal"
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0237] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0387] manager[0x55da37017080]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0389] manager[0x55da37017080]: rfkill: WWAN hardware radio set enabled
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0464] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0465] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0466] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 00:35:56 np0005539508 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0467] manager: Networking is enabled by state file
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0475] settings: Loaded settings plugin: keyfile (internal)
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0503] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0548] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0573] dhcp: init: Using DHCP client 'internal'
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0583] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0608] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0619] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0631] device (lo): Activation: starting connection 'lo' (1e70ab37-1fe6-47fd-afad-f3ac90d7657d)
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0649] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0653] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0693] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0699] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0703] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0707] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0710] device (eth0): carrier: link connected
Nov 29 00:35:56 np0005539508 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0717] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0728] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0739] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 00:35:56 np0005539508 systemd[1]: Started Network Manager.
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0747] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0749] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0755] manager: NetworkManager state is now CONNECTING
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0758] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 00:35:56 np0005539508 systemd[1]: Reached target Network.
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0771] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0778] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 00:35:56 np0005539508 systemd[1]: Starting Network Manager Wait Online...
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0828] dhcp4 (eth0): state changed new lease, address=38.102.83.22
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0838] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 00:35:56 np0005539508 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0865] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 00:35:56 np0005539508 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0888] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0890] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0899] device (lo): Activation: successful, device activated.
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0926] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0929] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0934] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0939] device (eth0): Activation: successful, device activated.
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0945] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 00:35:56 np0005539508 NetworkManager[861]: <info>  [1764394556.0950] manager: startup complete
Nov 29 00:35:56 np0005539508 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 29 00:35:56 np0005539508 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 29 00:35:56 np0005539508 systemd[1]: Reached target NFS client services.
Nov 29 00:35:56 np0005539508 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 29 00:35:56 np0005539508 systemd[1]: Reached target Remote File Systems.
Nov 29 00:35:56 np0005539508 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 00:35:56 np0005539508 systemd[1]: Finished Network Manager Wait Online.
Nov 29 00:35:56 np0005539508 systemd[1]: Starting Cloud-init: Network Stage...
Nov 29 00:35:56 np0005539508 cloud-init[924]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 29 Nov 2025 05:35:56 +0000. Up 8.14 seconds.
Nov 29 00:35:56 np0005539508 cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 29 00:35:56 np0005539508 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 00:35:56 np0005539508 cloud-init[924]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 29 00:35:56 np0005539508 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 00:35:56 np0005539508 cloud-init[924]: ci-info: |  eth0  | True |         38.102.83.22         | 255.255.255.0 | global | fa:16:3e:f2:9a:ed |
Nov 29 00:35:56 np0005539508 cloud-init[924]: ci-info: |  eth0  | True | fe80::f816:3eff:fef2:9aed/64 |       .       |  link  | fa:16:3e:f2:9a:ed |
Nov 29 00:35:56 np0005539508 cloud-init[924]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 29 00:35:56 np0005539508 cloud-init[924]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 29 00:35:56 np0005539508 cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
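The eth0 address/mask pair reported above can be cross-checked against the routing table that follows: 38.102.83.22 with mask 255.255.255.0 lands in the 38.102.83.0/24 on-link network. A quick check with Python's stdlib `ipaddress` module:

```python
import ipaddress

# Address and netmask exactly as reported for eth0 in the table above.
iface = ipaddress.ip_interface("38.102.83.22/255.255.255.0")

assert str(iface.network) == "38.102.83.0/24"
assert iface.ip in iface.network
```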
Nov 29 00:35:56 np0005539508 cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 29 00:35:56 np0005539508 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 00:35:56 np0005539508 cloud-init[924]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 29 00:35:56 np0005539508 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 00:35:56 np0005539508 cloud-init[924]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 29 00:35:56 np0005539508 cloud-init[924]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 29 00:35:56 np0005539508 cloud-init[924]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 29 00:35:56 np0005539508 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 00:35:56 np0005539508 cloud-init[924]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 29 00:35:56 np0005539508 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 00:35:56 np0005539508 cloud-init[924]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 29 00:35:56 np0005539508 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 00:35:56 np0005539508 cloud-init[924]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 29 00:35:56 np0005539508 cloud-init[924]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 29 00:35:56 np0005539508 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
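The IPv4 route table above also shows why route 2 carries the gateway and host flags (UGH): 169.254.169.254 (the OpenStack metadata service) is outside the 38.102.83.0/24 on-link network, so it is reached via 38.102.83.126. A sketch verifying that with `ipaddress`, using the addresses taken from the table:

```python
import ipaddress

local_net = ipaddress.ip_network("38.102.83.0/24")
metadata = ipaddress.ip_address("169.254.169.254")
metadata_gw = ipaddress.ip_address("38.102.83.126")

# The metadata address is not on-link, so it needs a dedicated host route...
assert metadata not in local_net
# ...via a gateway that *is* on the local network.
assert metadata_gw in local_net
```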
Nov 29 00:35:57 np0005539508 cloud-init[924]: Generating public/private rsa key pair.
Nov 29 00:35:57 np0005539508 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 29 00:35:57 np0005539508 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 29 00:35:57 np0005539508 cloud-init[924]: The key fingerprint is:
Nov 29 00:35:57 np0005539508 cloud-init[924]: SHA256:3CWYiCW/jSPEeU8I+Mvc3QgpD62OznLpLjReeStf2yo root@np0005539508.novalocal
Nov 29 00:35:57 np0005539508 cloud-init[924]: The key's randomart image is:
Nov 29 00:35:57 np0005539508 cloud-init[924]: +---[RSA 3072]----+
Nov 29 00:35:57 np0005539508 cloud-init[924]: |   .o .          |
Nov 29 00:35:57 np0005539508 cloud-init[924]: |  .. B o o       |
Nov 29 00:35:57 np0005539508 cloud-init[924]: |   .=.=.+ . .    |
Nov 29 00:35:57 np0005539508 cloud-init[924]: |   .+.+B . o     |
Nov 29 00:35:57 np0005539508 cloud-init[924]: |   ooB+oSo.      |
Nov 29 00:35:57 np0005539508 cloud-init[924]: | o o=oo.o .      |
Nov 29 00:35:57 np0005539508 cloud-init[924]: |o o+. ..         |
Nov 29 00:35:57 np0005539508 cloud-init[924]: |oo+..E. o        |
Nov 29 00:35:57 np0005539508 cloud-init[924]: | B= o..o..       |
Nov 29 00:35:57 np0005539508 cloud-init[924]: +----[SHA256]-----+
Nov 29 00:35:57 np0005539508 cloud-init[924]: Generating public/private ecdsa key pair.
Nov 29 00:35:57 np0005539508 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 29 00:35:57 np0005539508 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 29 00:35:57 np0005539508 cloud-init[924]: The key fingerprint is:
Nov 29 00:35:57 np0005539508 cloud-init[924]: SHA256:GO5/9HL6MYPP3c7Vj+Fz7AZaE4q18Kj85ROe60WDJGI root@np0005539508.novalocal
Nov 29 00:35:57 np0005539508 cloud-init[924]: The key's randomart image is:
Nov 29 00:35:57 np0005539508 cloud-init[924]: +---[ECDSA 256]---+
Nov 29 00:35:57 np0005539508 cloud-init[924]: |                 |
Nov 29 00:35:57 np0005539508 cloud-init[924]: |                 |
Nov 29 00:35:57 np0005539508 cloud-init[924]: |      . E . .    |
Nov 29 00:35:57 np0005539508 cloud-init[924]: |     . + ..o...  |
Nov 29 00:35:57 np0005539508 cloud-init[924]: |      o S  *.oo. |
Nov 29 00:35:57 np0005539508 cloud-init[924]: |     .    +.=.+..|
Nov 29 00:35:57 np0005539508 cloud-init[924]: |      .. o.o==o+o|
Nov 29 00:35:57 np0005539508 cloud-init[924]: |       .o o=BB.=*|
Nov 29 00:35:57 np0005539508 cloud-init[924]: |        .oo*Bo+**|
Nov 29 00:35:57 np0005539508 cloud-init[924]: +----[SHA256]-----+
Nov 29 00:35:57 np0005539508 cloud-init[924]: Generating public/private ed25519 key pair.
Nov 29 00:35:57 np0005539508 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 29 00:35:57 np0005539508 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 29 00:35:57 np0005539508 cloud-init[924]: The key fingerprint is:
Nov 29 00:35:57 np0005539508 cloud-init[924]: SHA256:4QuCVs1s8IYegrr0KSA4lnjz6YlWnvDPNcJwzZxpPfM root@np0005539508.novalocal
Nov 29 00:35:57 np0005539508 cloud-init[924]: The key's randomart image is:
Nov 29 00:35:57 np0005539508 cloud-init[924]: +--[ED25519 256]--+
Nov 29 00:35:57 np0005539508 cloud-init[924]: |    .            |
Nov 29 00:35:57 np0005539508 cloud-init[924]: | .   B           |
Nov 29 00:35:57 np0005539508 cloud-init[924]: |. . + B .        |
Nov 29 00:35:57 np0005539508 cloud-init[924]: |+ .= + = =       |
Nov 29 00:35:57 np0005539508 cloud-init[924]: |B+= + o S +      |
Nov 29 00:35:57 np0005539508 cloud-init[924]: |==ooo* o . +     |
Nov 29 00:35:57 np0005539508 cloud-init[924]: |.. Bo.o +   E    |
Nov 29 00:35:57 np0005539508 cloud-init[924]: |  oo+o o .       |
Nov 29 00:35:57 np0005539508 cloud-init[924]: | .. o.o          |
Nov 29 00:35:57 np0005539508 cloud-init[924]: +----[SHA256]-----+
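The SHA256 fingerprints printed for each host key above are computed as the SHA-256 digest of the decoded public-key blob, base64-encoded with the trailing padding stripped. A generic sketch of that computation (the "hello" input below is just a well-known test vector, not one of the keys from this log):

```python
import base64
import hashlib

def ssh_fingerprint(pubkey_blob_b64: str) -> str:
    """OpenSSH-style fingerprint: SHA-256 of the raw key blob, unpadded base64."""
    raw = base64.b64decode(pubkey_blob_b64)
    digest = hashlib.sha256(raw).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Known test vector: SHA-256("hello") in unpadded base64.
blob = base64.b64encode(b"hello").decode()
assert ssh_fingerprint(blob) == "SHA256:LPJNul+wow4m6DsqxbninhsWHlwfp0JecwQzYpOLmCQ"
```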
Nov 29 00:35:57 np0005539508 systemd[1]: Finished Cloud-init: Network Stage.
Nov 29 00:35:57 np0005539508 systemd[1]: Reached target Cloud-config availability.
Nov 29 00:35:57 np0005539508 systemd[1]: Reached target Network is Online.
Nov 29 00:35:57 np0005539508 systemd[1]: Starting Cloud-init: Config Stage...
Nov 29 00:35:57 np0005539508 systemd[1]: Starting Crash recovery kernel arming...
Nov 29 00:35:57 np0005539508 systemd[1]: Starting Notify NFS peers of a restart...
Nov 29 00:35:57 np0005539508 systemd[1]: Starting System Logging Service...
Nov 29 00:35:57 np0005539508 sm-notify[1006]: Version 2.5.4 starting
Nov 29 00:35:57 np0005539508 systemd[1]: Starting OpenSSH server daemon...
Nov 29 00:35:57 np0005539508 systemd[1]: Starting Permit User Sessions...
Nov 29 00:35:57 np0005539508 systemd[1]: Started Notify NFS peers of a restart.
Nov 29 00:35:57 np0005539508 systemd[1]: Started OpenSSH server daemon.
Nov 29 00:35:57 np0005539508 systemd[1]: Finished Permit User Sessions.
Nov 29 00:35:57 np0005539508 systemd[1]: Started Command Scheduler.
Nov 29 00:35:57 np0005539508 systemd[1]: Started Getty on tty1.
Nov 29 00:35:57 np0005539508 systemd[1]: Started Serial Getty on ttyS0.
Nov 29 00:35:57 np0005539508 systemd[1]: Reached target Login Prompts.
Nov 29 00:35:57 np0005539508 rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Nov 29 00:35:57 np0005539508 rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 29 00:35:57 np0005539508 systemd[1]: Started System Logging Service.
Nov 29 00:35:57 np0005539508 systemd[1]: Reached target Multi-User System.
Nov 29 00:35:57 np0005539508 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 29 00:35:57 np0005539508 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 00:35:57 np0005539508 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 29 00:35:57 np0005539508 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 29 00:35:58 np0005539508 kdumpctl[1016]: kdump: No kdump initial ramdisk found.
Nov 29 00:35:58 np0005539508 kdumpctl[1016]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 29 00:35:58 np0005539508 cloud-init[1114]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 29 Nov 2025 05:35:58 +0000. Up 9.86 seconds.
Nov 29 00:35:58 np0005539508 systemd[1]: Finished Cloud-init: Config Stage.
Nov 29 00:35:58 np0005539508 systemd[1]: Starting Cloud-init: Final Stage...
Nov 29 00:35:58 np0005539508 dracut[1285]: dracut-057-102.git20250818.el9
Nov 29 00:35:58 np0005539508 cloud-init[1303]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 29 Nov 2025 05:35:58 +0000. Up 10.28 seconds.
Nov 29 00:35:58 np0005539508 cloud-init[1305]: #############################################################
Nov 29 00:35:58 np0005539508 cloud-init[1306]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 29 00:35:58 np0005539508 cloud-init[1308]: 256 SHA256:GO5/9HL6MYPP3c7Vj+Fz7AZaE4q18Kj85ROe60WDJGI root@np0005539508.novalocal (ECDSA)
Nov 29 00:35:58 np0005539508 cloud-init[1312]: 256 SHA256:4QuCVs1s8IYegrr0KSA4lnjz6YlWnvDPNcJwzZxpPfM root@np0005539508.novalocal (ED25519)
Nov 29 00:35:58 np0005539508 cloud-init[1317]: 3072 SHA256:3CWYiCW/jSPEeU8I+Mvc3QgpD62OznLpLjReeStf2yo root@np0005539508.novalocal (RSA)
Nov 29 00:35:58 np0005539508 cloud-init[1322]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 29 00:35:58 np0005539508 cloud-init[1323]: #############################################################
Nov 29 00:35:58 np0005539508 dracut[1287]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 29 00:35:58 np0005539508 cloud-init[1303]: Cloud-init v. 24.4-7.el9 finished at Sat, 29 Nov 2025 05:35:58 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.47 seconds
Nov 29 00:35:58 np0005539508 systemd[1]: Finished Cloud-init: Final Stage.
Nov 29 00:35:58 np0005539508 systemd[1]: Reached target Cloud-init target.
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 29 00:35:59 np0005539508 dracut[1287]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: memstrack is not available
Nov 29 00:36:00 np0005539508 dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 29 00:36:00 np0005539508 dracut[1287]: memstrack is not available
Nov 29 00:36:00 np0005539508 dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 29 00:36:00 np0005539508 dracut[1287]: *** Including module: systemd ***
Nov 29 00:36:00 np0005539508 dracut[1287]: *** Including module: fips ***
Nov 29 00:36:01 np0005539508 dracut[1287]: *** Including module: systemd-initrd ***
Nov 29 00:36:01 np0005539508 dracut[1287]: *** Including module: i18n ***
Nov 29 00:36:01 np0005539508 chronyd[800]: Selected source 162.159.200.123 (2.centos.pool.ntp.org)
Nov 29 00:36:02 np0005539508 chronyd[800]: System clock wrong by 1.496457 seconds
Nov 29 00:36:02 np0005539508 chronyd[800]: System clock was stepped by 1.496457 seconds
Nov 29 00:36:02 np0005539508 chronyd[800]: System clock TAI offset set to 37 seconds
Nov 29 00:36:02 np0005539508 dracut[1287]: *** Including module: drm ***
Nov 29 00:36:03 np0005539508 dracut[1287]: *** Including module: prefixdevname ***
Nov 29 00:36:03 np0005539508 dracut[1287]: *** Including module: kernel-modules ***
Nov 29 00:36:03 np0005539508 kernel: block vda: the capability attribute has been deprecated.
Nov 29 00:36:03 np0005539508 dracut[1287]: *** Including module: kernel-modules-extra ***
Nov 29 00:36:03 np0005539508 dracut[1287]: *** Including module: qemu ***
Nov 29 00:36:03 np0005539508 dracut[1287]: *** Including module: fstab-sys ***
Nov 29 00:36:03 np0005539508 dracut[1287]: *** Including module: rootfs-block ***
Nov 29 00:36:03 np0005539508 dracut[1287]: *** Including module: terminfo ***
Nov 29 00:36:03 np0005539508 dracut[1287]: *** Including module: udev-rules ***
Nov 29 00:36:04 np0005539508 dracut[1287]: Skipping udev rule: 91-permissions.rules
Nov 29 00:36:04 np0005539508 dracut[1287]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 29 00:36:04 np0005539508 dracut[1287]: *** Including module: virtiofs ***
Nov 29 00:36:04 np0005539508 dracut[1287]: *** Including module: dracut-systemd ***
Nov 29 00:36:04 np0005539508 dracut[1287]: *** Including module: usrmount ***
Nov 29 00:36:04 np0005539508 dracut[1287]: *** Including module: base ***
Nov 29 00:36:04 np0005539508 dracut[1287]: *** Including module: fs-lib ***
Nov 29 00:36:04 np0005539508 dracut[1287]: *** Including module: kdumpbase ***
Nov 29 00:36:05 np0005539508 dracut[1287]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 29 00:36:05 np0005539508 dracut[1287]:  microcode_ctl module: mangling fw_dir
Nov 29 00:36:05 np0005539508 dracut[1287]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 29 00:36:05 np0005539508 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 29 00:36:05 np0005539508 dracut[1287]:    microcode_ctl: configuration "intel" is ignored
Nov 29 00:36:05 np0005539508 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 29 00:36:05 np0005539508 dracut[1287]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 29 00:36:05 np0005539508 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 29 00:36:05 np0005539508 dracut[1287]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 29 00:36:05 np0005539508 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 29 00:36:05 np0005539508 dracut[1287]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 29 00:36:05 np0005539508 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 29 00:36:05 np0005539508 dracut[1287]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 29 00:36:05 np0005539508 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 29 00:36:05 np0005539508 dracut[1287]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 29 00:36:05 np0005539508 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 29 00:36:05 np0005539508 dracut[1287]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 29 00:36:05 np0005539508 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 29 00:36:05 np0005539508 dracut[1287]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 29 00:36:05 np0005539508 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 29 00:36:05 np0005539508 dracut[1287]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 29 00:36:05 np0005539508 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 29 00:36:05 np0005539508 dracut[1287]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 29 00:36:05 np0005539508 dracut[1287]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 29 00:36:05 np0005539508 dracut[1287]: *** Including module: openssl ***
Nov 29 00:36:05 np0005539508 dracut[1287]: *** Including module: shutdown ***
Nov 29 00:36:05 np0005539508 dracut[1287]: *** Including module: squash ***
Nov 29 00:36:05 np0005539508 dracut[1287]: *** Including modules done ***
Nov 29 00:36:05 np0005539508 dracut[1287]: *** Installing kernel module dependencies ***
Nov 29 00:36:06 np0005539508 dracut[1287]: *** Installing kernel module dependencies done ***
Nov 29 00:36:06 np0005539508 dracut[1287]: *** Resolving executable dependencies ***
Nov 29 00:36:06 np0005539508 irqbalance[789]: Cannot change IRQ 35 affinity: Operation not permitted
Nov 29 00:36:06 np0005539508 irqbalance[789]: IRQ 35 affinity is now unmanaged
Nov 29 00:36:06 np0005539508 irqbalance[789]: Cannot change IRQ 33 affinity: Operation not permitted
Nov 29 00:36:06 np0005539508 irqbalance[789]: IRQ 33 affinity is now unmanaged
Nov 29 00:36:06 np0005539508 irqbalance[789]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 29 00:36:06 np0005539508 irqbalance[789]: IRQ 31 affinity is now unmanaged
Nov 29 00:36:06 np0005539508 irqbalance[789]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 29 00:36:06 np0005539508 irqbalance[789]: IRQ 28 affinity is now unmanaged
Nov 29 00:36:06 np0005539508 irqbalance[789]: Cannot change IRQ 34 affinity: Operation not permitted
Nov 29 00:36:06 np0005539508 irqbalance[789]: IRQ 34 affinity is now unmanaged
Nov 29 00:36:06 np0005539508 irqbalance[789]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 29 00:36:06 np0005539508 irqbalance[789]: IRQ 32 affinity is now unmanaged
Nov 29 00:36:06 np0005539508 irqbalance[789]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 29 00:36:06 np0005539508 irqbalance[789]: IRQ 30 affinity is now unmanaged
Nov 29 00:36:06 np0005539508 irqbalance[789]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 29 00:36:06 np0005539508 irqbalance[789]: IRQ 29 affinity is now unmanaged
Nov 29 00:36:07 np0005539508 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 00:36:08 np0005539508 dracut[1287]: *** Resolving executable dependencies done ***
Nov 29 00:36:08 np0005539508 dracut[1287]: *** Generating early-microcode cpio image ***
Nov 29 00:36:08 np0005539508 dracut[1287]: *** Store current command line parameters ***
Nov 29 00:36:08 np0005539508 dracut[1287]: Stored kernel commandline:
Nov 29 00:36:08 np0005539508 dracut[1287]: No dracut internal kernel commandline stored in the initramfs
Nov 29 00:36:08 np0005539508 dracut[1287]: *** Install squash loader ***
Nov 29 00:36:09 np0005539508 dracut[1287]: *** Squashing the files inside the initramfs ***
Nov 29 00:36:10 np0005539508 dracut[1287]: *** Squashing the files inside the initramfs done ***
Nov 29 00:36:10 np0005539508 dracut[1287]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 29 00:36:10 np0005539508 dracut[1287]: *** Hardlinking files ***
Nov 29 00:36:10 np0005539508 dracut[1287]: *** Hardlinking files done ***
Nov 29 00:36:10 np0005539508 dracut[1287]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 29 00:36:11 np0005539508 kdumpctl[1016]: kdump: kexec: loaded kdump kernel
Nov 29 00:36:11 np0005539508 kdumpctl[1016]: kdump: Starting kdump: [OK]
Nov 29 00:36:11 np0005539508 systemd[1]: Finished Crash recovery kernel arming.
Nov 29 00:36:11 np0005539508 systemd[1]: Startup finished in 1.781s (kernel) + 2.583s (initrd) + 17.578s (userspace) = 21.943s.
Nov 29 00:36:27 np0005539508 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 00:37:40 np0005539508 systemd[1]: Created slice User Slice of UID 1000.
Nov 29 00:37:40 np0005539508 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 29 00:37:40 np0005539508 systemd-logind[797]: New session 1 of user zuul.
Nov 29 00:37:40 np0005539508 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 29 00:37:40 np0005539508 systemd[1]: Starting User Manager for UID 1000...
Nov 29 00:37:40 np0005539508 systemd[4304]: Queued start job for default target Main User Target.
Nov 29 00:37:40 np0005539508 systemd[4304]: Created slice User Application Slice.
Nov 29 00:37:40 np0005539508 systemd[4304]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 00:37:40 np0005539508 systemd[4304]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 00:37:40 np0005539508 systemd[4304]: Reached target Paths.
Nov 29 00:37:40 np0005539508 systemd[4304]: Reached target Timers.
Nov 29 00:37:40 np0005539508 systemd[4304]: Starting D-Bus User Message Bus Socket...
Nov 29 00:37:40 np0005539508 systemd[4304]: Starting Create User's Volatile Files and Directories...
Nov 29 00:37:40 np0005539508 systemd[4304]: Finished Create User's Volatile Files and Directories.
Nov 29 00:37:40 np0005539508 systemd[4304]: Listening on D-Bus User Message Bus Socket.
Nov 29 00:37:40 np0005539508 systemd[4304]: Reached target Sockets.
Nov 29 00:37:40 np0005539508 systemd[4304]: Reached target Basic System.
Nov 29 00:37:40 np0005539508 systemd[4304]: Reached target Main User Target.
Nov 29 00:37:40 np0005539508 systemd[4304]: Startup finished in 100ms.
Nov 29 00:37:40 np0005539508 systemd[1]: Started User Manager for UID 1000.
Nov 29 00:37:40 np0005539508 systemd[1]: Started Session 1 of User zuul.
Nov 29 00:37:41 np0005539508 python3[4386]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:37:44 np0005539508 python3[4414]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:37:52 np0005539508 python3[4472]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:37:53 np0005539508 python3[4512]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 29 00:37:55 np0005539508 python3[4538]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrxzXgpPmVv8+7+5w1Oy1RsXOPeqdxTcUlq37d0RcYulAAKXWla/qJwAX46v5xh/Mg4GnRpk77lvDWcVnOQjFYQg3OeLmFgDDNPV0YL7URmIe/MvgcqM+Kx7/SQjk+hEt7rUIqkFUjeREX60T5eTEMANFgJrljqZcBTMgYr67x4v7oFELzKuZIO0SCAprJ9NYmdRaC+DsjZjU+DuFdHBnfZCpgkTFMCda2FAS9BneAVOIMCBu5RgNVJXeAgIsPX9GNX3qDJMKOluQLOW++2gbue3S1Nrs1GMPm+IPRD4yWc9eZs1tpR1jdP1BEPBpyQRQlUn4z7BUdEogSzYiXCSmqzN1o/R3mdi16bG8e2lHve5MQFABPko8KsgVOJu0H7b7wGo/oGdXH7sdlKuGoWxWyTFcq3RcVkaVgjKtt6zeswkrpxMUv9/6NXPrhIWqdQm/wVw0Pv2p98yq10QRPyBv5yI8zcNjxueUl3aM8SZML87E6lhkUFFdAuVof+Sl5Pz8= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:37:55 np0005539508 python3[4562]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:37:56 np0005539508 python3[4661]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:37:56 np0005539508 python3[4732]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764394676.1403496-251-129804431246993/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=601e897125784122ba5d7472ada57b1d_id_rsa follow=False checksum=5ac8bea8bfb8f348688bf24843ddb1285b2d351d backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:37:57 np0005539508 python3[4855]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:37:57 np0005539508 python3[4926]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764394677.150716-306-42429902461958/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=601e897125784122ba5d7472ada57b1d_id_rsa.pub follow=False checksum=48b31d706687f3385690285b8caeaea67ea8286c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:37:59 np0005539508 python3[4974]: ansible-ping Invoked with data=pong
Nov 29 00:38:00 np0005539508 python3[4998]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:38:02 np0005539508 python3[5056]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 29 00:38:03 np0005539508 python3[5088]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:38:03 np0005539508 python3[5112]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:38:03 np0005539508 python3[5136]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:38:05 np0005539508 python3[5160]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:38:05 np0005539508 python3[5184]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:38:05 np0005539508 python3[5208]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:38:07 np0005539508 python3[5234]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:38:08 np0005539508 python3[5312]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:38:08 np0005539508 python3[5385]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764394687.5734053-31-214443851879255/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:38:09 np0005539508 python3[5433]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:09 np0005539508 python3[5457]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:09 np0005539508 python3[5481]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:10 np0005539508 python3[5505]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:10 np0005539508 python3[5529]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:10 np0005539508 python3[5553]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:10 np0005539508 python3[5577]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:11 np0005539508 python3[5601]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:11 np0005539508 python3[5625]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:11 np0005539508 python3[5649]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:12 np0005539508 python3[5673]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:12 np0005539508 python3[5697]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:12 np0005539508 python3[5721]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:12 np0005539508 python3[5745]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:13 np0005539508 python3[5769]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:13 np0005539508 python3[5793]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:13 np0005539508 python3[5817]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:14 np0005539508 python3[5841]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:14 np0005539508 python3[5865]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:14 np0005539508 python3[5889]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:14 np0005539508 python3[5913]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:15 np0005539508 python3[5937]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:15 np0005539508 python3[5961]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:15 np0005539508 python3[5985]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:15 np0005539508 python3[6009]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:16 np0005539508 python3[6033]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:38:19 np0005539508 python3[6059]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 00:38:19 np0005539508 systemd[1]: Starting Time & Date Service...
Nov 29 00:38:19 np0005539508 systemd[1]: Started Time & Date Service.
Nov 29 00:38:19 np0005539508 systemd-timedated[6061]: Changed time zone to 'UTC' (UTC).
Nov 29 00:38:19 np0005539508 python3[6090]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:38:20 np0005539508 python3[6166]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:38:20 np0005539508 python3[6237]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764394700.0193212-251-171256774323141/source _original_basename=tmpmtniz78x follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:38:21 np0005539508 python3[6337]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:38:21 np0005539508 python3[6408]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764394701.1279824-301-78693874424833/source _original_basename=tmpkukt5feb follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:38:22 np0005539508 python3[6510]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:38:23 np0005539508 python3[6583]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764394702.4326034-381-247193518684047/source _original_basename=tmpbh_psin_ follow=False checksum=0a5264336eaf669ce906803fabc64043ef3757da backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:38:23 np0005539508 python3[6631]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:38:23 np0005539508 python3[6657]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:38:24 np0005539508 python3[6737]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:38:25 np0005539508 python3[6810]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764394704.3634124-451-108775467523355/source _original_basename=tmpotslm687 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:38:25 np0005539508 python3[6861]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-3d5b-5bb0-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:38:26 np0005539508 python3[6889]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-3d5b-5bb0-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 29 00:38:27 np0005539508 python3[6917]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:38:46 np0005539508 python3[6943]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:38:49 np0005539508 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 00:39:27 np0005539508 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 29 00:39:27 np0005539508 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 29 00:39:27 np0005539508 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 29 00:39:27 np0005539508 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 29 00:39:27 np0005539508 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 29 00:39:27 np0005539508 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 29 00:39:27 np0005539508 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 29 00:39:27 np0005539508 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 29 00:39:27 np0005539508 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 29 00:39:27 np0005539508 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 29 00:39:27 np0005539508 NetworkManager[861]: <info>  [1764394767.5718] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 00:39:27 np0005539508 systemd-udevd[6949]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 00:39:27 np0005539508 NetworkManager[861]: <info>  [1764394767.5922] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 00:39:27 np0005539508 NetworkManager[861]: <info>  [1764394767.5961] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 29 00:39:27 np0005539508 NetworkManager[861]: <info>  [1764394767.5968] device (eth1): carrier: link connected
Nov 29 00:39:27 np0005539508 NetworkManager[861]: <info>  [1764394767.5971] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 29 00:39:27 np0005539508 NetworkManager[861]: <info>  [1764394767.5983] policy: auto-activating connection 'Wired connection 1' (ca3faf74-3a1e-393e-b2c9-9f72990abe6a)
Nov 29 00:39:27 np0005539508 NetworkManager[861]: <info>  [1764394767.5990] device (eth1): Activation: starting connection 'Wired connection 1' (ca3faf74-3a1e-393e-b2c9-9f72990abe6a)
Nov 29 00:39:27 np0005539508 NetworkManager[861]: <info>  [1764394767.5991] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 00:39:27 np0005539508 NetworkManager[861]: <info>  [1764394767.5995] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 00:39:27 np0005539508 NetworkManager[861]: <info>  [1764394767.6000] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 00:39:27 np0005539508 NetworkManager[861]: <info>  [1764394767.6010] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 00:39:28 np0005539508 python3[6975]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-4e5a-44df-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:39:38 np0005539508 python3[7055]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:39:38 np0005539508 python3[7128]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764394777.9972265-104-249937094941339/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=238071955a4d7097a928b7c267e7f2bab5a0e0d2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:39:39 np0005539508 python3[7178]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 00:39:39 np0005539508 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 00:39:39 np0005539508 systemd[1]: Stopped Network Manager Wait Online.
Nov 29 00:39:39 np0005539508 systemd[1]: Stopping Network Manager Wait Online...
Nov 29 00:39:39 np0005539508 systemd[1]: Stopping Network Manager...
Nov 29 00:39:39 np0005539508 NetworkManager[861]: <info>  [1764394779.6582] caught SIGTERM, shutting down normally.
Nov 29 00:39:39 np0005539508 NetworkManager[861]: <info>  [1764394779.6594] dhcp4 (eth0): canceled DHCP transaction
Nov 29 00:39:39 np0005539508 NetworkManager[861]: <info>  [1764394779.6594] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 00:39:39 np0005539508 NetworkManager[861]: <info>  [1764394779.6594] dhcp4 (eth0): state changed no lease
Nov 29 00:39:39 np0005539508 NetworkManager[861]: <info>  [1764394779.6598] manager: NetworkManager state is now CONNECTING
Nov 29 00:39:39 np0005539508 NetworkManager[861]: <info>  [1764394779.6659] dhcp4 (eth1): canceled DHCP transaction
Nov 29 00:39:39 np0005539508 NetworkManager[861]: <info>  [1764394779.6660] dhcp4 (eth1): state changed no lease
Nov 29 00:39:39 np0005539508 NetworkManager[861]: <info>  [1764394779.6758] exiting (success)
Nov 29 00:39:39 np0005539508 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 00:39:39 np0005539508 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 00:39:39 np0005539508 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 00:39:39 np0005539508 systemd[1]: Stopped Network Manager.
Nov 29 00:39:39 np0005539508 systemd[1]: NetworkManager.service: Consumed 1.628s CPU time, 9.9M memory peak.
Nov 29 00:39:39 np0005539508 systemd[1]: Starting Network Manager...
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.7444] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:b7b17a39-22f5-4f4f-9861-b1bcbadcfe77)
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.7449] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.7512] manager[0x55f814f93070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 00:39:39 np0005539508 systemd[1]: Starting Hostname Service...
Nov 29 00:39:39 np0005539508 systemd[1]: Started Hostname Service.
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8665] hostname: hostname: using hostnamed
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8667] hostname: static hostname changed from (none) to "np0005539508.novalocal"
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8680] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8688] manager[0x55f814f93070]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8690] manager[0x55f814f93070]: rfkill: WWAN hardware radio set enabled
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8739] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8740] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8741] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8741] manager: Networking is enabled by state file
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8745] settings: Loaded settings plugin: keyfile (internal)
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8752] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8793] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8807] dhcp: init: Using DHCP client 'internal'
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8812] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8822] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8830] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8845] device (lo): Activation: starting connection 'lo' (1e70ab37-1fe6-47fd-afad-f3ac90d7657d)
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8858] device (eth0): carrier: link connected
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8866] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8875] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8877] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8890] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8907] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8921] device (eth1): carrier: link connected
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8930] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8943] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (ca3faf74-3a1e-393e-b2c9-9f72990abe6a) (indicated)
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8945] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8957] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8971] device (eth1): Activation: starting connection 'Wired connection 1' (ca3faf74-3a1e-393e-b2c9-9f72990abe6a)
Nov 29 00:39:39 np0005539508 systemd[1]: Started Network Manager.
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.8984] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.9018] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.9025] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.9029] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.9033] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.9037] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.9042] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.9045] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.9049] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.9059] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.9063] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.9075] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.9079] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.9102] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.9109] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.9119] device (lo): Activation: successful, device activated.
Nov 29 00:39:39 np0005539508 systemd[1]: Starting Network Manager Wait Online...
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.9137] dhcp4 (eth0): state changed new lease, address=38.102.83.22
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.9148] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.9241] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.9279] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.9281] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.9285] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.9289] device (eth0): Activation: successful, device activated.
Nov 29 00:39:39 np0005539508 NetworkManager[7189]: <info>  [1764394779.9295] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 00:39:40 np0005539508 python3[7264]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-4e5a-44df-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:39:50 np0005539508 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 00:40:09 np0005539508 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 00:40:24 np0005539508 NetworkManager[7189]: <info>  [1764394824.7602] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 00:40:24 np0005539508 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 00:40:24 np0005539508 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 00:40:24 np0005539508 NetworkManager[7189]: <info>  [1764394824.7899] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 00:40:24 np0005539508 NetworkManager[7189]: <info>  [1764394824.7902] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 00:40:24 np0005539508 NetworkManager[7189]: <info>  [1764394824.7912] device (eth1): Activation: successful, device activated.
Nov 29 00:40:24 np0005539508 NetworkManager[7189]: <info>  [1764394824.7921] manager: startup complete
Nov 29 00:40:24 np0005539508 NetworkManager[7189]: <info>  [1764394824.7923] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 29 00:40:24 np0005539508 NetworkManager[7189]: <warn>  [1764394824.7931] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 29 00:40:24 np0005539508 NetworkManager[7189]: <info>  [1764394824.7942] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 29 00:40:24 np0005539508 systemd[1]: Finished Network Manager Wait Online.
Nov 29 00:40:24 np0005539508 NetworkManager[7189]: <info>  [1764394824.8059] dhcp4 (eth1): canceled DHCP transaction
Nov 29 00:40:24 np0005539508 NetworkManager[7189]: <info>  [1764394824.8060] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 00:40:24 np0005539508 NetworkManager[7189]: <info>  [1764394824.8060] dhcp4 (eth1): state changed no lease
Nov 29 00:40:24 np0005539508 NetworkManager[7189]: <info>  [1764394824.8082] policy: auto-activating connection 'ci-private-network' (b3ca7565-e6c0-5ba2-a076-c2cd58810e8e)
Nov 29 00:40:24 np0005539508 NetworkManager[7189]: <info>  [1764394824.8089] device (eth1): Activation: starting connection 'ci-private-network' (b3ca7565-e6c0-5ba2-a076-c2cd58810e8e)
Nov 29 00:40:24 np0005539508 NetworkManager[7189]: <info>  [1764394824.8090] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 00:40:24 np0005539508 NetworkManager[7189]: <info>  [1764394824.8094] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 00:40:24 np0005539508 NetworkManager[7189]: <info>  [1764394824.8104] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 00:40:24 np0005539508 NetworkManager[7189]: <info>  [1764394824.8116] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 00:40:24 np0005539508 NetworkManager[7189]: <info>  [1764394824.8161] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 00:40:24 np0005539508 NetworkManager[7189]: <info>  [1764394824.8164] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 00:40:24 np0005539508 NetworkManager[7189]: <info>  [1764394824.8174] device (eth1): Activation: successful, device activated.
Nov 29 00:40:28 np0005539508 systemd[4304]: Starting Mark boot as successful...
Nov 29 00:40:28 np0005539508 systemd[4304]: Finished Mark boot as successful.
Nov 29 00:40:34 np0005539508 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 00:40:40 np0005539508 systemd-logind[797]: Session 1 logged out. Waiting for processes to exit.
Nov 29 00:41:43 np0005539508 systemd-logind[797]: New session 3 of user zuul.
Nov 29 00:41:43 np0005539508 systemd[1]: Started Session 3 of User zuul.
Nov 29 00:41:43 np0005539508 python3[7374]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:41:44 np0005539508 python3[7447]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764394903.6208549-373-276086504366831/source _original_basename=tmpaonanc0i follow=False checksum=95c43167cb69fbe3f3b9eff0c3ecf63c2bbd5b70 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:41:48 np0005539508 systemd-logind[797]: Session 3 logged out. Waiting for processes to exit.
Nov 29 00:41:48 np0005539508 systemd[1]: session-3.scope: Deactivated successfully.
Nov 29 00:41:48 np0005539508 systemd-logind[797]: Removed session 3.
Nov 29 00:43:28 np0005539508 systemd[4304]: Created slice User Background Tasks Slice.
Nov 29 00:43:28 np0005539508 systemd[4304]: Starting Cleanup of User's Temporary Files and Directories...
Nov 29 00:43:28 np0005539508 systemd[4304]: Finished Cleanup of User's Temporary Files and Directories.
Nov 29 00:47:03 np0005539508 systemd-logind[797]: New session 4 of user zuul.
Nov 29 00:47:03 np0005539508 systemd[1]: Started Session 4 of User zuul.
Nov 29 00:47:03 np0005539508 python3[7512]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-b110-1686-000000000ca2-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:47:04 np0005539508 python3[7541]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:47:04 np0005539508 python3[7567]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:47:04 np0005539508 python3[7593]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:47:05 np0005539508 python3[7619]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:47:05 np0005539508 python3[7645]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:47:06 np0005539508 python3[7723]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:47:06 np0005539508 python3[7796]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764395225.8410692-365-183849364866585/source _original_basename=tmpvi_grj7t follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:47:07 np0005539508 python3[7846]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 00:47:07 np0005539508 systemd[1]: Reloading.
Nov 29 00:47:07 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:47:09 np0005539508 python3[7901]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 29 00:47:10 np0005539508 python3[7927]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:47:10 np0005539508 python3[7955]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:47:10 np0005539508 python3[7983]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:47:10 np0005539508 python3[8011]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:47:11 np0005539508 python3[8038]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-b110-1686-000000000ca9-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:47:12 np0005539508 python3[8068]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 00:47:15 np0005539508 systemd[1]: session-4.scope: Deactivated successfully.
Nov 29 00:47:15 np0005539508 systemd[1]: session-4.scope: Consumed 4.478s CPU time.
Nov 29 00:47:15 np0005539508 systemd-logind[797]: Session 4 logged out. Waiting for processes to exit.
Nov 29 00:47:15 np0005539508 systemd-logind[797]: Removed session 4.
Nov 29 00:47:16 np0005539508 systemd-logind[797]: New session 5 of user zuul.
Nov 29 00:47:16 np0005539508 systemd[1]: Started Session 5 of User zuul.
Nov 29 00:47:17 np0005539508 python3[8102]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 00:47:32 np0005539508 kernel: SELinux:  Converting 385 SID table entries...
Nov 29 00:47:32 np0005539508 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 00:47:32 np0005539508 kernel: SELinux:  policy capability open_perms=1
Nov 29 00:47:32 np0005539508 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 00:47:32 np0005539508 kernel: SELinux:  policy capability always_check_network=0
Nov 29 00:47:32 np0005539508 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 00:47:32 np0005539508 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 00:47:32 np0005539508 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 00:47:41 np0005539508 kernel: SELinux:  Converting 385 SID table entries...
Nov 29 00:47:41 np0005539508 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 00:47:41 np0005539508 kernel: SELinux:  policy capability open_perms=1
Nov 29 00:47:41 np0005539508 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 00:47:41 np0005539508 kernel: SELinux:  policy capability always_check_network=0
Nov 29 00:47:41 np0005539508 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 00:47:41 np0005539508 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 00:47:41 np0005539508 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 00:47:50 np0005539508 kernel: SELinux:  Converting 385 SID table entries...
Nov 29 00:47:50 np0005539508 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 00:47:50 np0005539508 kernel: SELinux:  policy capability open_perms=1
Nov 29 00:47:50 np0005539508 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 00:47:50 np0005539508 kernel: SELinux:  policy capability always_check_network=0
Nov 29 00:47:50 np0005539508 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 00:47:50 np0005539508 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 00:47:50 np0005539508 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 00:47:51 np0005539508 setsebool[8170]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 29 00:47:51 np0005539508 setsebool[8170]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 29 00:48:02 np0005539508 kernel: SELinux:  Converting 388 SID table entries...
Nov 29 00:48:02 np0005539508 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 00:48:02 np0005539508 kernel: SELinux:  policy capability open_perms=1
Nov 29 00:48:02 np0005539508 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 00:48:02 np0005539508 kernel: SELinux:  policy capability always_check_network=0
Nov 29 00:48:02 np0005539508 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 00:48:02 np0005539508 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 00:48:02 np0005539508 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 00:48:20 np0005539508 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 29 00:48:20 np0005539508 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 00:48:20 np0005539508 systemd[1]: Starting man-db-cache-update.service...
Nov 29 00:48:20 np0005539508 systemd[1]: Reloading.
Nov 29 00:48:20 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 00:48:21 np0005539508 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 00:48:24 np0005539508 python3[11614]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-4d52-d96a-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:48:25 np0005539508 kernel: evm: overlay not supported
Nov 29 00:48:25 np0005539508 systemd[4304]: Starting D-Bus User Message Bus...
Nov 29 00:48:25 np0005539508 dbus-broker-launch[12487]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 29 00:48:25 np0005539508 dbus-broker-launch[12487]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 29 00:48:25 np0005539508 systemd[4304]: Started D-Bus User Message Bus.
Nov 29 00:48:25 np0005539508 dbus-broker-lau[12487]: Ready
Nov 29 00:48:25 np0005539508 systemd[4304]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 29 00:48:25 np0005539508 systemd[4304]: Created slice Slice /user.
Nov 29 00:48:25 np0005539508 systemd[4304]: podman-12317.scope: unit configures an IP firewall, but not running as root.
Nov 29 00:48:25 np0005539508 systemd[4304]: (This warning is only shown for the first unit using IP firewalling.)
Nov 29 00:48:25 np0005539508 systemd[4304]: Started podman-12317.scope.
Nov 29 00:48:25 np0005539508 systemd[4304]: Started podman-pause-6214d594.scope.
Nov 29 00:48:26 np0005539508 python3[13066]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.97:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.97:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:48:26 np0005539508 python3[13066]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 29 00:48:26 np0005539508 systemd[1]: session-5.scope: Deactivated successfully.
Nov 29 00:48:26 np0005539508 systemd[1]: session-5.scope: Consumed 59.583s CPU time.
Nov 29 00:48:26 np0005539508 systemd-logind[797]: Session 5 logged out. Waiting for processes to exit.
Nov 29 00:48:26 np0005539508 systemd-logind[797]: Removed session 5.
Nov 29 00:48:52 np0005539508 systemd-logind[797]: New session 6 of user zuul.
Nov 29 00:48:52 np0005539508 systemd[1]: Started Session 6 of User zuul.
Nov 29 00:48:52 np0005539508 python3[22458]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEavs4NswnbtUkOvkddxZOa3c0S0nRNnsg86RQqSndpHonQx0HDlahei607KJa9VEo3VyPPhB6+AdHzrVqMc6KA= zuul@np0005539507.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:48:53 np0005539508 python3[22664]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEavs4NswnbtUkOvkddxZOa3c0S0nRNnsg86RQqSndpHonQx0HDlahei607KJa9VEo3VyPPhB6+AdHzrVqMc6KA= zuul@np0005539507.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:48:54 np0005539508 python3[22998]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005539508.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 29 00:48:54 np0005539508 python3[23183]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEavs4NswnbtUkOvkddxZOa3c0S0nRNnsg86RQqSndpHonQx0HDlahei607KJa9VEo3VyPPhB6+AdHzrVqMc6KA= zuul@np0005539507.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 00:48:55 np0005539508 python3[23450]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:48:55 np0005539508 python3[23733]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764395335.0861526-167-277432247256758/source _original_basename=tmpb29m9yaq follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:48:56 np0005539508 python3[24050]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 29 00:48:56 np0005539508 systemd[1]: Starting Hostname Service...
Nov 29 00:48:56 np0005539508 systemd[1]: Started Hostname Service.
Nov 29 00:48:56 np0005539508 systemd-hostnamed[24126]: Changed pretty hostname to 'compute-0'
Nov 29 00:48:56 np0005539508 systemd-hostnamed[24126]: Hostname set to <compute-0> (static)
Nov 29 00:48:56 np0005539508 NetworkManager[7189]: <info>  [1764395336.9762] hostname: static hostname changed from "np0005539508.novalocal" to "compute-0"
Nov 29 00:48:56 np0005539508 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 00:48:57 np0005539508 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 00:48:57 np0005539508 systemd[1]: session-6.scope: Deactivated successfully.
Nov 29 00:48:57 np0005539508 systemd[1]: session-6.scope: Consumed 2.620s CPU time.
Nov 29 00:48:57 np0005539508 systemd-logind[797]: Session 6 logged out. Waiting for processes to exit.
Nov 29 00:48:57 np0005539508 systemd-logind[797]: Removed session 6.
Nov 29 00:49:07 np0005539508 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 00:49:15 np0005539508 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 00:49:15 np0005539508 systemd[1]: Finished man-db-cache-update.service.
Nov 29 00:49:15 np0005539508 systemd[1]: man-db-cache-update.service: Consumed 1min 5.819s CPU time.
Nov 29 00:49:15 np0005539508 systemd[1]: run-r6d4e92f2203343d8b7a3b79be9bea0c0.service: Deactivated successfully.
Nov 29 00:49:27 np0005539508 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 00:51:18 np0005539508 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 29 00:51:18 np0005539508 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 29 00:51:18 np0005539508 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 29 00:51:18 np0005539508 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 29 00:53:01 np0005539508 systemd-logind[797]: New session 7 of user zuul.
Nov 29 00:53:01 np0005539508 systemd[1]: Started Session 7 of User zuul.
Nov 29 00:53:02 np0005539508 python3[30004]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 00:53:04 np0005539508 python3[30120]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:53:04 np0005539508 python3[30193]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764395583.8706179-34045-54332860422931/source mode=0755 _original_basename=delorean.repo follow=False checksum=a16f090252000d02a7f7d540bb10f7c1c9cd4ac5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:53:05 np0005539508 python3[30219]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:53:05 np0005539508 python3[30292]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764395583.8706179-34045-54332860422931/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:53:05 np0005539508 python3[30318]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:53:06 np0005539508 python3[30391]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764395583.8706179-34045-54332860422931/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:53:06 np0005539508 python3[30417]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:53:07 np0005539508 python3[30490]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764395583.8706179-34045-54332860422931/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:53:07 np0005539508 python3[30516]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:53:07 np0005539508 python3[30589]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764395583.8706179-34045-54332860422931/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:53:07 np0005539508 python3[30615]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:53:08 np0005539508 python3[30688]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764395583.8706179-34045-54332860422931/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:53:08 np0005539508 python3[30714]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 00:53:09 np0005539508 python3[30787]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764395583.8706179-34045-54332860422931/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=25e801a9a05537c191e2aa500f19076ac31d3e5b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 00:53:20 np0005539508 python3[30845]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 00:58:20 np0005539508 systemd[1]: session-7.scope: Deactivated successfully.
Nov 29 00:58:20 np0005539508 systemd[1]: session-7.scope: Consumed 5.894s CPU time.
Nov 29 00:58:20 np0005539508 systemd-logind[797]: Session 7 logged out. Waiting for processes to exit.
Nov 29 00:58:20 np0005539508 systemd-logind[797]: Removed session 7.
Nov 29 01:06:27 np0005539508 systemd-logind[797]: New session 8 of user zuul.
Nov 29 01:06:27 np0005539508 systemd[1]: Started Session 8 of User zuul.
Nov 29 01:06:28 np0005539508 python3.9[31174]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:06:29 np0005539508 python3.9[31355]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:06:38 np0005539508 systemd[1]: session-8.scope: Deactivated successfully.
Nov 29 01:06:38 np0005539508 systemd[1]: session-8.scope: Consumed 7.747s CPU time.
Nov 29 01:06:38 np0005539508 systemd-logind[797]: Session 8 logged out. Waiting for processes to exit.
Nov 29 01:06:38 np0005539508 systemd-logind[797]: Removed session 8.
Nov 29 01:06:54 np0005539508 systemd-logind[797]: New session 9 of user zuul.
Nov 29 01:06:54 np0005539508 systemd[1]: Started Session 9 of User zuul.
Nov 29 01:06:55 np0005539508 python3.9[31576]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 29 01:06:56 np0005539508 python3.9[31753]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:06:57 np0005539508 python3.9[31905]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:06:59 np0005539508 python3.9[32058]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:07:00 np0005539508 python3.9[32211]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:07:01 np0005539508 python3.9[32363]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:07:02 np0005539508 python3.9[32486]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764396420.8603268-182-115163283736205/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:07:03 np0005539508 python3.9[32640]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:07:04 np0005539508 python3.9[32796]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:07:05 np0005539508 python3.9[32948]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:07:06 np0005539508 python3.9[33098]: ansible-ansible.builtin.service_facts Invoked
Nov 29 01:07:10 np0005539508 python3.9[33353]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:07:10 np0005539508 python3.9[33503]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:07:12 np0005539508 python3.9[33657]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:07:13 np0005539508 python3.9[33815]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 01:07:14 np0005539508 python3.9[33899]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:07:57 np0005539508 systemd[1]: Reloading.
Nov 29 01:07:57 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:07:58 np0005539508 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 29 01:07:58 np0005539508 systemd[1]: Reloading.
Nov 29 01:07:58 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:07:58 np0005539508 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 29 01:07:58 np0005539508 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 29 01:07:58 np0005539508 systemd[1]: Reloading.
Nov 29 01:07:58 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:07:59 np0005539508 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 29 01:07:59 np0005539508 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Nov 29 01:07:59 np0005539508 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Nov 29 01:07:59 np0005539508 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Nov 29 01:08:59 np0005539508 systemd[1]: Starting dnf makecache...
Nov 29 01:09:00 np0005539508 dnf[34386]: Failed determining last makecache time.
Nov 29 01:09:00 np0005539508 dnf[34386]: delorean-openstack-barbican-42b4c41831408a8e323 111 kB/s | 3.0 kB     00:00
Nov 29 01:09:00 np0005539508 dnf[34386]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 175 kB/s | 3.0 kB     00:00
Nov 29 01:09:00 np0005539508 dnf[34386]: delorean-openstack-cinder-1c00d6490d88e436f26ef 186 kB/s | 3.0 kB     00:00
Nov 29 01:09:00 np0005539508 dnf[34386]: delorean-python-stevedore-c4acc5639fd2329372142 175 kB/s | 3.0 kB     00:00
Nov 29 01:09:00 np0005539508 dnf[34386]: delorean-python-cloudkitty-tests-tempest-2c80f8 172 kB/s | 3.0 kB     00:00
Nov 29 01:09:00 np0005539508 dnf[34386]: delorean-os-net-config-9758ab42364673d01bc5014e 149 kB/s | 3.0 kB     00:00
Nov 29 01:09:00 np0005539508 dnf[34386]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 192 kB/s | 3.0 kB     00:00
Nov 29 01:09:00 np0005539508 dnf[34386]: delorean-python-designate-tests-tempest-347fdbc 200 kB/s | 3.0 kB     00:00
Nov 29 01:09:00 np0005539508 dnf[34386]: delorean-openstack-glance-1fd12c29b339f30fe823e 197 kB/s | 3.0 kB     00:00
Nov 29 01:09:00 np0005539508 dnf[34386]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 196 kB/s | 3.0 kB     00:00
Nov 29 01:09:00 np0005539508 dnf[34386]: delorean-openstack-manila-3c01b7181572c95dac462 192 kB/s | 3.0 kB     00:00
Nov 29 01:09:00 np0005539508 dnf[34386]: delorean-python-whitebox-neutron-tests-tempest- 197 kB/s | 3.0 kB     00:00
Nov 29 01:09:00 np0005539508 dnf[34386]: delorean-openstack-octavia-ba397f07a7331190208c 192 kB/s | 3.0 kB     00:00
Nov 29 01:09:00 np0005539508 dnf[34386]: delorean-openstack-watcher-c014f81a8647287f6dcc 171 kB/s | 3.0 kB     00:00
Nov 29 01:09:00 np0005539508 dnf[34386]: delorean-python-tcib-1124124ec06aadbac34f0d340b 189 kB/s | 3.0 kB     00:00
Nov 29 01:09:00 np0005539508 dnf[34386]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 184 kB/s | 3.0 kB     00:00
Nov 29 01:09:00 np0005539508 dnf[34386]: delorean-openstack-swift-dc98a8463506ac520c469a 185 kB/s | 3.0 kB     00:00
Nov 29 01:09:00 np0005539508 dnf[34386]: delorean-python-tempestconf-8515371b7cceebd4282 164 kB/s | 3.0 kB     00:00
Nov 29 01:09:00 np0005539508 dnf[34386]: delorean-openstack-heat-ui-013accbfd179753bc3f0 199 kB/s | 3.0 kB     00:00
Nov 29 01:09:00 np0005539508 dnf[34386]: CentOS Stream 9 - BaseOS                         77 kB/s | 7.3 kB     00:00
Nov 29 01:09:00 np0005539508 dnf[34386]: CentOS Stream 9 - AppStream                      33 kB/s | 7.4 kB     00:00
Nov 29 01:09:00 np0005539508 dnf[34386]: CentOS Stream 9 - CRB                            70 kB/s | 7.2 kB     00:00
Nov 29 01:09:01 np0005539508 dnf[34386]: CentOS Stream 9 - Extras packages                74 kB/s | 8.3 kB     00:00
Nov 29 01:09:01 np0005539508 dnf[34386]: dlrn-antelope-testing                           104 kB/s | 3.0 kB     00:00
Nov 29 01:09:01 np0005539508 dnf[34386]: dlrn-antelope-build-deps                        120 kB/s | 3.0 kB     00:00
Nov 29 01:09:01 np0005539508 dnf[34386]: centos9-rabbitmq                                 88 kB/s | 3.0 kB     00:00
Nov 29 01:09:01 np0005539508 dnf[34386]: centos9-storage                                  40 kB/s | 3.0 kB     00:00
Nov 29 01:09:01 np0005539508 dnf[34386]: centos9-opstools                                 25 kB/s | 3.0 kB     00:00
Nov 29 01:09:01 np0005539508 dnf[34386]: NFV SIG OpenvSwitch                             112 kB/s | 3.0 kB     00:00
Nov 29 01:09:01 np0005539508 dnf[34386]: repo-setup-centos-appstream                     150 kB/s | 4.4 kB     00:00
Nov 29 01:09:01 np0005539508 dnf[34386]: repo-setup-centos-baseos                        163 kB/s | 3.9 kB     00:00
Nov 29 01:09:01 np0005539508 dnf[34386]: repo-setup-centos-highavailability               77 kB/s | 3.9 kB     00:00
Nov 29 01:09:01 np0005539508 dnf[34386]: repo-setup-centos-powertools                    104 kB/s | 4.3 kB     00:00
Nov 29 01:09:02 np0005539508 dnf[34386]: Extra Packages for Enterprise Linux 9 - x86_64  107 kB/s |  33 kB     00:00
Nov 29 01:09:02 np0005539508 dnf[34386]: Metadata cache created.
Nov 29 01:09:02 np0005539508 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 29 01:09:02 np0005539508 systemd[1]: Finished dnf makecache.
Nov 29 01:09:02 np0005539508 systemd[1]: dnf-makecache.service: Consumed 1.814s CPU time.
Nov 29 01:09:10 np0005539508 kernel: SELinux:  Converting 2718 SID table entries...
Nov 29 01:09:10 np0005539508 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:09:10 np0005539508 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:09:10 np0005539508 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:09:10 np0005539508 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:09:10 np0005539508 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:09:10 np0005539508 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:09:10 np0005539508 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:09:11 np0005539508 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 29 01:09:11 np0005539508 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 01:09:11 np0005539508 systemd[1]: Starting man-db-cache-update.service...
Nov 29 01:09:11 np0005539508 systemd[1]: Reloading.
Nov 29 01:09:11 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:09:11 np0005539508 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 01:09:12 np0005539508 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 01:09:12 np0005539508 systemd[1]: Finished man-db-cache-update.service.
Nov 29 01:09:12 np0005539508 systemd[1]: man-db-cache-update.service: Consumed 1.349s CPU time.
Nov 29 01:09:12 np0005539508 systemd[1]: run-r8c6adfed0c3f46b9b28c6b687f452354.service: Deactivated successfully.
Nov 29 01:09:18 np0005539508 python3.9[35498]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:09:21 np0005539508 python3.9[35779]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 01:09:22 np0005539508 python3.9[35933]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 01:09:26 np0005539508 python3.9[36088]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:09:27 np0005539508 python3.9[36240]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 29 01:09:32 np0005539508 python3.9[36392]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:09:32 np0005539508 python3.9[36546]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:09:36 np0005539508 python3.9[36669]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396572.401859-671-253087301284542/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3385b01217fece5877d0a0cc7f45f60761b1d6d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:09:38 np0005539508 python3.9[36823]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:09:39 np0005539508 python3.9[36975]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:09:40 np0005539508 python3.9[37128]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:09:41 np0005539508 python3.9[37280]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 01:09:41 np0005539508 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 01:09:43 np0005539508 python3.9[37434]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 01:09:44 np0005539508 python3.9[37592]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 01:09:45 np0005539508 python3.9[37752]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 01:09:46 np0005539508 python3.9[37906]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 01:09:47 np0005539508 python3.9[38064]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 29 01:09:48 np0005539508 python3.9[38217]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:09:51 np0005539508 python3.9[38370]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:09:52 np0005539508 python3.9[38522]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:09:52 np0005539508 python3.9[38645]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764396591.4921277-1028-266701471914340/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:09:54 np0005539508 python3.9[38799]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 01:09:54 np0005539508 systemd[1]: Starting Load Kernel Modules...
Nov 29 01:09:54 np0005539508 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 29 01:09:54 np0005539508 kernel: Bridge firewalling registered
Nov 29 01:09:54 np0005539508 systemd-modules-load[38803]: Inserted module 'br_netfilter'
Nov 29 01:09:54 np0005539508 systemd[1]: Finished Load Kernel Modules.
Nov 29 01:09:55 np0005539508 python3.9[38959]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:09:55 np0005539508 python3.9[39082]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764396594.6039958-1097-228041779108748/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:09:56 np0005539508 python3.9[39234]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:10:00 np0005539508 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Nov 29 01:10:00 np0005539508 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Nov 29 01:10:01 np0005539508 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 01:10:01 np0005539508 systemd[1]: Starting man-db-cache-update.service...
Nov 29 01:10:01 np0005539508 systemd[1]: Reloading.
Nov 29 01:10:01 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:10:01 np0005539508 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 01:10:04 np0005539508 python3.9[41331]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:10:05 np0005539508 python3.9[42282]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 01:10:06 np0005539508 python3.9[43103]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:10:06 np0005539508 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 01:10:06 np0005539508 systemd[1]: Finished man-db-cache-update.service.
Nov 29 01:10:06 np0005539508 systemd[1]: man-db-cache-update.service: Consumed 5.836s CPU time.
Nov 29 01:10:06 np0005539508 systemd[1]: run-rcc745b10e61e4ca18fd82697e7a2feff.service: Deactivated successfully.
Nov 29 01:10:07 np0005539508 python3.9[43465]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:10:07 np0005539508 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 01:10:08 np0005539508 systemd[1]: Starting Authorization Manager...
Nov 29 01:10:08 np0005539508 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 01:10:08 np0005539508 polkitd[43682]: Started polkitd version 0.117
Nov 29 01:10:08 np0005539508 systemd[1]: Started Authorization Manager.
Nov 29 01:10:09 np0005539508 python3.9[43853]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:10:09 np0005539508 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 01:10:09 np0005539508 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 01:10:09 np0005539508 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 01:10:09 np0005539508 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 01:10:09 np0005539508 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 01:10:10 np0005539508 python3.9[44016]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 01:10:14 np0005539508 python3.9[44168]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:10:14 np0005539508 systemd[1]: Reloading.
Nov 29 01:10:14 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:10:15 np0005539508 python3.9[44357]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:10:15 np0005539508 systemd[1]: Reloading.
Nov 29 01:10:16 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:10:17 np0005539508 python3.9[44546]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:10:17 np0005539508 python3.9[44699]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:10:17 np0005539508 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 29 01:10:18 np0005539508 python3.9[44852]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:10:21 np0005539508 python3.9[45019]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:10:22 np0005539508 python3.9[45173]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 01:10:22 np0005539508 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 01:10:22 np0005539508 systemd[1]: Stopped Apply Kernel Variables.
Nov 29 01:10:22 np0005539508 systemd[1]: Stopping Apply Kernel Variables...
Nov 29 01:10:22 np0005539508 systemd[1]: Starting Apply Kernel Variables...
Nov 29 01:10:22 np0005539508 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 01:10:22 np0005539508 systemd[1]: Finished Apply Kernel Variables.
Nov 29 01:10:22 np0005539508 systemd[1]: session-9.scope: Deactivated successfully.
Nov 29 01:10:22 np0005539508 systemd[1]: session-9.scope: Consumed 2min 18.731s CPU time.
Nov 29 01:10:22 np0005539508 systemd-logind[797]: Session 9 logged out. Waiting for processes to exit.
Nov 29 01:10:22 np0005539508 systemd-logind[797]: Removed session 9.
Nov 29 01:10:28 np0005539508 systemd-logind[797]: New session 10 of user zuul.
Nov 29 01:10:28 np0005539508 systemd[1]: Started Session 10 of User zuul.
Nov 29 01:10:30 np0005539508 python3.9[45356]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:10:31 np0005539508 python3.9[45513]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 01:10:32 np0005539508 python3.9[45666]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 01:10:34 np0005539508 python3.9[45825]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 01:10:35 np0005539508 python3.9[45985]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 01:10:36 np0005539508 python3.9[46071]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 01:10:40 np0005539508 python3.9[46237]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:10:52 np0005539508 kernel: SELinux:  Converting 2730 SID table entries...
Nov 29 01:10:52 np0005539508 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:10:52 np0005539508 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:10:52 np0005539508 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:10:52 np0005539508 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:10:52 np0005539508 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:10:52 np0005539508 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:10:52 np0005539508 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:10:52 np0005539508 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 29 01:10:52 np0005539508 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 29 01:10:54 np0005539508 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 01:10:54 np0005539508 systemd[1]: Starting man-db-cache-update.service...
Nov 29 01:10:54 np0005539508 systemd[1]: Reloading.
Nov 29 01:10:54 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:10:54 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:10:54 np0005539508 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 01:10:55 np0005539508 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 01:10:55 np0005539508 systemd[1]: Finished man-db-cache-update.service.
Nov 29 01:10:55 np0005539508 systemd[1]: man-db-cache-update.service: Consumed 1.041s CPU time.
Nov 29 01:10:55 np0005539508 systemd[1]: run-r78dac5bb0fa74677999dca655113ca93.service: Deactivated successfully.
Nov 29 01:10:59 np0005539508 python3.9[47342]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 01:10:59 np0005539508 systemd[1]: Reloading.
Nov 29 01:10:59 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:10:59 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:10:59 np0005539508 systemd[1]: Starting Open vSwitch Database Unit...
Nov 29 01:10:59 np0005539508 chown[47384]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 29 01:10:59 np0005539508 ovs-ctl[47389]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 29 01:10:59 np0005539508 ovs-ctl[47389]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 29 01:10:59 np0005539508 ovs-ctl[47389]: Starting ovsdb-server [  OK  ]
Nov 29 01:10:59 np0005539508 ovs-vsctl[47438]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 29 01:10:59 np0005539508 ovs-vsctl[47454]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"93db784b-4e42-404a-b548-49ad165fd917\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 29 01:11:00 np0005539508 ovs-ctl[47389]: Configuring Open vSwitch system IDs [  OK  ]
Nov 29 01:11:00 np0005539508 ovs-vsctl[47463]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 29 01:11:00 np0005539508 ovs-ctl[47389]: Enabling remote OVSDB managers [  OK  ]
Nov 29 01:11:00 np0005539508 systemd[1]: Started Open vSwitch Database Unit.
Nov 29 01:11:00 np0005539508 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 29 01:11:00 np0005539508 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 29 01:11:00 np0005539508 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 29 01:11:00 np0005539508 kernel: openvswitch: Open vSwitch switching datapath
Nov 29 01:11:00 np0005539508 ovs-ctl[47507]: Inserting openvswitch module [  OK  ]
Nov 29 01:11:00 np0005539508 ovs-ctl[47476]: Starting ovs-vswitchd [  OK  ]
Nov 29 01:11:00 np0005539508 ovs-ctl[47476]: Enabling remote OVSDB managers [  OK  ]
Nov 29 01:11:00 np0005539508 ovs-vsctl[47525]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 29 01:11:00 np0005539508 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 29 01:11:00 np0005539508 systemd[1]: Starting Open vSwitch...
Nov 29 01:11:00 np0005539508 systemd[1]: Finished Open vSwitch.
Nov 29 01:11:01 np0005539508 python3.9[47676]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:11:02 np0005539508 python3.9[47828]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 01:11:03 np0005539508 kernel: SELinux:  Converting 2744 SID table entries...
Nov 29 01:11:03 np0005539508 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:11:03 np0005539508 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:11:03 np0005539508 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:11:03 np0005539508 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:11:03 np0005539508 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:11:03 np0005539508 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:11:03 np0005539508 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:11:05 np0005539508 python3.9[47985]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:11:05 np0005539508 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 29 01:11:06 np0005539508 python3.9[48143]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:11:08 np0005539508 python3.9[48297]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:11:10 np0005539508 python3.9[48584]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 01:11:11 np0005539508 python3.9[48734]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:11:12 np0005539508 python3.9[48888]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:11:13 np0005539508 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 01:11:14 np0005539508 systemd[1]: Starting man-db-cache-update.service...
Nov 29 01:11:14 np0005539508 systemd[1]: Reloading.
Nov 29 01:11:14 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:11:14 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:11:14 np0005539508 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 01:11:14 np0005539508 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 01:11:14 np0005539508 systemd[1]: Finished man-db-cache-update.service.
Nov 29 01:11:14 np0005539508 systemd[1]: run-r3c5409f3773c45b9a943bc3a655a1d38.service: Deactivated successfully.
Nov 29 01:11:15 np0005539508 python3.9[49205]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 01:11:15 np0005539508 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 01:11:15 np0005539508 systemd[1]: Stopped Network Manager Wait Online.
Nov 29 01:11:15 np0005539508 systemd[1]: Stopping Network Manager Wait Online...
Nov 29 01:11:15 np0005539508 systemd[1]: Stopping Network Manager...
Nov 29 01:11:15 np0005539508 NetworkManager[7189]: <info>  [1764396675.8036] caught SIGTERM, shutting down normally.
Nov 29 01:11:15 np0005539508 NetworkManager[7189]: <info>  [1764396675.8061] dhcp4 (eth0): canceled DHCP transaction
Nov 29 01:11:15 np0005539508 NetworkManager[7189]: <info>  [1764396675.8062] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:11:15 np0005539508 NetworkManager[7189]: <info>  [1764396675.8062] dhcp4 (eth0): state changed no lease
Nov 29 01:11:15 np0005539508 NetworkManager[7189]: <info>  [1764396675.8068] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 01:11:15 np0005539508 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 01:11:15 np0005539508 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 01:11:16 np0005539508 NetworkManager[7189]: <info>  [1764396676.0096] exiting (success)
Nov 29 01:11:16 np0005539508 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 01:11:16 np0005539508 systemd[1]: Stopped Network Manager.
Nov 29 01:11:16 np0005539508 systemd[1]: NetworkManager.service: Consumed 13.394s CPU time, 4.1M memory peak, read 0B from disk, written 32.0K to disk.
Nov 29 01:11:16 np0005539508 systemd[1]: Starting Network Manager...
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.0948] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:b7b17a39-22f5-4f4f-9861-b1bcbadcfe77)
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.0949] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.1019] manager[0x55b4cec47090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 01:11:16 np0005539508 systemd[1]: Starting Hostname Service...
Nov 29 01:11:16 np0005539508 systemd[1]: Started Hostname Service.
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2211] hostname: hostname: using hostnamed
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2212] hostname: static hostname changed from (none) to "compute-0"
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2219] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2229] manager[0x55b4cec47090]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2229] manager[0x55b4cec47090]: rfkill: WWAN hardware radio set enabled
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2262] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2277] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2278] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2279] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2280] manager: Networking is enabled by state file
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2283] settings: Loaded settings plugin: keyfile (internal)
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2289] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2330] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2342] dhcp: init: Using DHCP client 'internal'
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2347] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2354] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2362] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2374] device (lo): Activation: starting connection 'lo' (1e70ab37-1fe6-47fd-afad-f3ac90d7657d)
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2383] device (eth0): carrier: link connected
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2391] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2397] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2398] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2407] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2416] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2424] device (eth1): carrier: link connected
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2431] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2437] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (b3ca7565-e6c0-5ba2-a076-c2cd58810e8e) (indicated)
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2438] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2446] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2457] device (eth1): Activation: starting connection 'ci-private-network' (b3ca7565-e6c0-5ba2-a076-c2cd58810e8e)
Nov 29 01:11:16 np0005539508 systemd[1]: Started Network Manager.
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2466] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2482] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2485] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2488] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2491] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2495] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2498] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2502] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2508] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2518] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2523] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2535] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2554] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2833] dhcp4 (eth0): state changed new lease, address=38.102.83.22
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.2843] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 01:11:16 np0005539508 systemd[1]: Starting Network Manager Wait Online...
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.4429] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.4444] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.4447] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.4450] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.4460] device (lo): Activation: successful, device activated.
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.4472] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.4477] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.4484] device (eth1): Activation: successful, device activated.
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.4538] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.4541] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.4547] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.4554] device (eth0): Activation: successful, device activated.
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.4564] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 01:11:16 np0005539508 NetworkManager[49224]: <info>  [1764396676.4568] manager: startup complete
Nov 29 01:11:16 np0005539508 systemd[1]: Finished Network Manager Wait Online.
Nov 29 01:11:17 np0005539508 python3.9[49432]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:11:22 np0005539508 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 01:11:22 np0005539508 systemd[1]: Starting man-db-cache-update.service...
Nov 29 01:11:22 np0005539508 systemd[1]: Reloading.
Nov 29 01:11:22 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:11:22 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:11:22 np0005539508 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 01:11:23 np0005539508 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 01:11:23 np0005539508 systemd[1]: Finished man-db-cache-update.service.
Nov 29 01:11:23 np0005539508 systemd[1]: run-r52c03e1367404518a3055f445798d2c3.service: Deactivated successfully.
Nov 29 01:11:26 np0005539508 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 01:11:30 np0005539508 python3.9[49898]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:11:31 np0005539508 python3.9[50050]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:11:32 np0005539508 python3.9[50204]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:11:32 np0005539508 python3.9[50356]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:11:33 np0005539508 python3.9[50508]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:11:34 np0005539508 python3.9[50660]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:11:35 np0005539508 python3.9[50812]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:11:35 np0005539508 python3.9[50935]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764396694.7503397-652-106497898391310/.source _original_basename=.h8c69tcj follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:11:36 np0005539508 python3.9[51087]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:11:37 np0005539508 python3.9[51239]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 29 01:11:38 np0005539508 python3.9[51391]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:11:41 np0005539508 python3.9[51820]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 29 01:11:42 np0005539508 ansible-async_wrapper.py[51995]: Invoked with j211298114970 300 /home/zuul/.ansible/tmp/ansible-tmp-1764396701.7859013-850-70372614563925/AnsiballZ_edpm_os_net_config.py _
Nov 29 01:11:42 np0005539508 ansible-async_wrapper.py[51998]: Starting module and watcher
Nov 29 01:11:42 np0005539508 ansible-async_wrapper.py[51998]: Start watching 51999 (300)
Nov 29 01:11:42 np0005539508 ansible-async_wrapper.py[51999]: Start module (51999)
Nov 29 01:11:42 np0005539508 ansible-async_wrapper.py[51995]: Return async_wrapper task started.
Nov 29 01:11:42 np0005539508 python3.9[52000]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 29 01:11:43 np0005539508 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 29 01:11:43 np0005539508 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 29 01:11:43 np0005539508 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 29 01:11:43 np0005539508 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 29 01:11:43 np0005539508 kernel: cfg80211: failed to load regulatory.db
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.0838] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52001 uid=0 result="success"
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.0861] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52001 uid=0 result="success"
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.1507] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.1509] audit: op="connection-add" uuid="eeac5863-66ee-4b3f-bf7f-c02d23c041db" name="br-ex-br" pid=52001 uid=0 result="success"
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.1526] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.1527] audit: op="connection-add" uuid="27fc12c3-9aac-4dc3-8080-14921a438ebd" name="br-ex-port" pid=52001 uid=0 result="success"
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.1544] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.1546] audit: op="connection-add" uuid="fe1178fc-2e29-4419-8399-354dc28e3b2c" name="eth1-port" pid=52001 uid=0 result="success"
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.1561] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.1562] audit: op="connection-add" uuid="6012ad6e-71c5-48a9-9c01-3870d9361158" name="vlan20-port" pid=52001 uid=0 result="success"
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.1576] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.1577] audit: op="connection-add" uuid="808ffa2a-001b-45da-ae40-c67f83a923a5" name="vlan21-port" pid=52001 uid=0 result="success"
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.1592] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.1593] audit: op="connection-add" uuid="952dc223-7473-4a72-a39e-de9d203b944f" name="vlan22-port" pid=52001 uid=0 result="success"
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.1605] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.1607] audit: op="connection-add" uuid="58b0e596-4e75-4486-bab7-ad59ffc2a5e8" name="vlan23-port" pid=52001 uid=0 result="success"
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.1627] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.method,ipv6.dhcp-timeout,ipv6.addr-gen-mode,connection.autoconnect-priority,connection.timestamp,802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=52001 uid=0 result="success"
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.1646] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.1648] audit: op="connection-add" uuid="10b840ee-e2b5-4908-8c0e-b2ae3a1e1dbf" name="br-ex-if" pid=52001 uid=0 result="success"
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4187] audit: op="connection-update" uuid="b3ca7565-e6c0-5ba2-a076-c2cd58810e8e" name="ci-private-network" args="ovs-external-ids.data,ipv6.dns,ipv6.method,ipv6.addresses,ipv6.routes,ipv6.addr-gen-mode,ipv6.routing-rules,connection.slave-type,connection.port-type,connection.controller,connection.master,connection.timestamp,ipv4.dns,ipv4.method,ipv4.addresses,ipv4.never-default,ipv4.routes,ipv4.routing-rules,ovs-interface.type" pid=52001 uid=0 result="success"
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4220] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4222] audit: op="connection-add" uuid="fd9d2d91-4934-4b8f-a318-cbe602a2ac38" name="vlan20-if" pid=52001 uid=0 result="success"
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4252] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4254] audit: op="connection-add" uuid="1cd3f48f-8572-46f4-8849-79769c7469fe" name="vlan21-if" pid=52001 uid=0 result="success"
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4284] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4286] audit: op="connection-add" uuid="b38ee61b-bc8a-4e5f-a666-ec49f7e18104" name="vlan22-if" pid=52001 uid=0 result="success"
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4313] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4316] audit: op="connection-add" uuid="7139b4eb-3cbc-4ea8-8191-e076d0c1b71d" name="vlan23-if" pid=52001 uid=0 result="success"
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4335] audit: op="connection-delete" uuid="ca3faf74-3a1e-393e-b2c9-9f72990abe6a" name="Wired connection 1" pid=52001 uid=0 result="success"
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4356] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4372] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4377] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (eeac5863-66ee-4b3f-bf7f-c02d23c041db)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4378] audit: op="connection-activate" uuid="eeac5863-66ee-4b3f-bf7f-c02d23c041db" name="br-ex-br" pid=52001 uid=0 result="success"
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4382] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4394] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4409] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (27fc12c3-9aac-4dc3-8080-14921a438ebd)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4412] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4424] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4432] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (fe1178fc-2e29-4419-8399-354dc28e3b2c)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4435] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4446] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4453] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (6012ad6e-71c5-48a9-9c01-3870d9361158)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4455] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4466] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4472] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (808ffa2a-001b-45da-ae40-c67f83a923a5)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4475] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4485] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4494] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (952dc223-7473-4a72-a39e-de9d203b944f)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4496] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4507] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4513] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (58b0e596-4e75-4486-bab7-ad59ffc2a5e8)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4515] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4519] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4522] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4532] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4539] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4545] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (10b840ee-e2b5-4908-8c0e-b2ae3a1e1dbf)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4546] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4551] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4554] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4555] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4557] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4574] device (eth1): disconnecting for new activation request.
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4575] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4580] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4584] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4586] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4591] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4599] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4606] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (fd9d2d91-4934-4b8f-a318-cbe602a2ac38)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4607] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4612] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4615] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4617] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4622] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4629] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4636] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (1cd3f48f-8572-46f4-8849-79769c7469fe)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4637] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4643] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4647] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4650] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4656] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4665] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4674] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (b38ee61b-bc8a-4e5f-a666-ec49f7e18104)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4675] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4680] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4683] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4685] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4689] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4695] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4701] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (7139b4eb-3cbc-4ea8-8191-e076d0c1b71d)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4703] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4707] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4712] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4714] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4717] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4744] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.method,ipv6.addr-gen-mode,connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=52001 uid=0 result="success"
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4747] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4752] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4755] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4767] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4774] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4779] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4783] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4786] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4794] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4801] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4806] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4808] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4816] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4824] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4829] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4832] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4841] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4848] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4852] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4853] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4859] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4863] dhcp4 (eth0): canceled DHCP transaction
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4863] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4863] dhcp4 (eth0): state changed no lease
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4865] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.4877] audit: op="device-reapply" interface="eth1" ifindex=3 pid=52001 uid=0 result="fail" reason="Device is not activated"
Nov 29 01:11:45 np0005539508 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 01:11:45 np0005539508 kernel: ovs-system: entered promiscuous mode
Nov 29 01:11:45 np0005539508 systemd-udevd[52005]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 01:11:45 np0005539508 kernel: Timeout policy base is empty
Nov 29 01:11:45 np0005539508 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.5735] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.5741] dhcp4 (eth0): state changed new lease, address=38.102.83.22
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.5752] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 29 01:11:45 np0005539508 kernel: br-ex: entered promiscuous mode
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.5801] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 29 01:11:45 np0005539508 kernel: vlan20: entered promiscuous mode
Nov 29 01:11:45 np0005539508 systemd-udevd[52007]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 01:11:45 np0005539508 kernel: vlan21: entered promiscuous mode
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.5917] device (eth1): Activation: starting connection 'ci-private-network' (b3ca7565-e6c0-5ba2-a076-c2cd58810e8e)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.5927] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.5929] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.5930] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.5931] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.5933] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.5934] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.5935] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.5941] device (eth1): state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.5946] device (eth1): disconnecting for new activation request.
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.5947] audit: op="connection-activate" uuid="b3ca7565-e6c0-5ba2-a076-c2cd58810e8e" name="ci-private-network" pid=52001 uid=0 result="success"
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.5951] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.5957] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.5963] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.5969] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.5976] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.5979] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.5983] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.5986] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.5990] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6003] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6007] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6010] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 kernel: vlan22: entered promiscuous mode
Nov 29 01:11:45 np0005539508 systemd-udevd[52006]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6020] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6025] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6030] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6038] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6051] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6070] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6079] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6082] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6083] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6089] device (eth1): Activation: starting connection 'ci-private-network' (b3ca7565-e6c0-5ba2-a076-c2cd58810e8e)
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6092] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52001 uid=0 result="success"
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6116] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6121] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6129] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6145] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 kernel: vlan23: entered promiscuous mode
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6154] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6156] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6161] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6181] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6188] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6192] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6206] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6212] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6221] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6228] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6230] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6232] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6237] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6243] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6247] device (eth1): Activation: successful, device activated.
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6252] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6259] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6263] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6268] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6273] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6277] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6284] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.6298] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.7726] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.7734] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 01:11:45 np0005539508 NetworkManager[49224]: <info>  [1764396705.7747] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 01:11:46 np0005539508 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 01:11:46 np0005539508 python3.9[52365]: ansible-ansible.legacy.async_status Invoked with jid=j211298114970.51995 mode=status _async_dir=/root/.ansible_async
Nov 29 01:11:47 np0005539508 NetworkManager[49224]: <info>  [1764396707.2615] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52001 uid=0 result="success"
Nov 29 01:11:47 np0005539508 NetworkManager[49224]: <info>  [1764396707.4813] checkpoint[0x55b4cec1d950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 29 01:11:47 np0005539508 NetworkManager[49224]: <info>  [1764396707.4817] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52001 uid=0 result="success"
Nov 29 01:11:47 np0005539508 ansible-async_wrapper.py[51998]: 51999 still running (300)
Nov 29 01:11:48 np0005539508 NetworkManager[49224]: <info>  [1764396708.0159] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52001 uid=0 result="success"
Nov 29 01:11:48 np0005539508 NetworkManager[49224]: <info>  [1764396708.0179] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52001 uid=0 result="success"
Nov 29 01:11:48 np0005539508 NetworkManager[49224]: <info>  [1764396708.4323] audit: op="networking-control" arg="global-dns-configuration" pid=52001 uid=0 result="success"
Nov 29 01:11:48 np0005539508 NetworkManager[49224]: <info>  [1764396708.4386] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 29 01:11:48 np0005539508 NetworkManager[49224]: <info>  [1764396708.4422] audit: op="networking-control" arg="global-dns-configuration" pid=52001 uid=0 result="success"
Nov 29 01:11:48 np0005539508 NetworkManager[49224]: <info>  [1764396708.4446] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52001 uid=0 result="success"
Nov 29 01:11:48 np0005539508 NetworkManager[49224]: <info>  [1764396708.7010] checkpoint[0x55b4cec1da20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 29 01:11:48 np0005539508 NetworkManager[49224]: <info>  [1764396708.7018] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52001 uid=0 result="success"
Nov 29 01:11:48 np0005539508 ansible-async_wrapper.py[51999]: Module complete (51999)
Nov 29 01:11:50 np0005539508 python3.9[52473]: ansible-ansible.legacy.async_status Invoked with jid=j211298114970.51995 mode=status _async_dir=/root/.ansible_async
Nov 29 01:11:50 np0005539508 python3.9[52574]: ansible-ansible.legacy.async_status Invoked with jid=j211298114970.51995 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 01:11:51 np0005539508 python3.9[52726]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:11:52 np0005539508 python3.9[52849]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764396711.0545173-931-166419454236344/.source.returncode _original_basename=.8r17aj_q follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:11:52 np0005539508 ansible-async_wrapper.py[51998]: Done in kid B.
Nov 29 01:11:53 np0005539508 python3.9[53002]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:11:53 np0005539508 python3.9[53126]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764396712.5663388-979-162472533845544/.source.cfg _original_basename=.9ikxmu67 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:11:54 np0005539508 python3.9[53278]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 01:11:55 np0005539508 systemd[1]: Reloading Network Manager...
Nov 29 01:11:55 np0005539508 NetworkManager[49224]: <info>  [1764396715.0725] audit: op="reload" arg="0" pid=53282 uid=0 result="success"
Nov 29 01:11:55 np0005539508 NetworkManager[49224]: <info>  [1764396715.0737] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 29 01:11:55 np0005539508 systemd[1]: Reloaded Network Manager.
Nov 29 01:11:55 np0005539508 systemd[1]: session-10.scope: Deactivated successfully.
Nov 29 01:11:55 np0005539508 systemd[1]: session-10.scope: Consumed 54.306s CPU time.
Nov 29 01:11:55 np0005539508 systemd-logind[797]: Session 10 logged out. Waiting for processes to exit.
Nov 29 01:11:55 np0005539508 systemd-logind[797]: Removed session 10.
Nov 29 01:12:00 np0005539508 systemd-logind[797]: New session 11 of user zuul.
Nov 29 01:12:00 np0005539508 systemd[1]: Started Session 11 of User zuul.
Nov 29 01:12:01 np0005539508 python3.9[53470]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:12:02 np0005539508 python3.9[53624]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 01:12:05 np0005539508 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 01:12:05 np0005539508 python3.9[53818]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:12:06 np0005539508 systemd[1]: session-11.scope: Deactivated successfully.
Nov 29 01:12:06 np0005539508 systemd[1]: session-11.scope: Consumed 2.826s CPU time.
Nov 29 01:12:06 np0005539508 systemd-logind[797]: Session 11 logged out. Waiting for processes to exit.
Nov 29 01:12:06 np0005539508 systemd-logind[797]: Removed session 11.
Nov 29 01:12:11 np0005539508 systemd-logind[797]: New session 12 of user zuul.
Nov 29 01:12:11 np0005539508 systemd[1]: Started Session 12 of User zuul.
Nov 29 01:12:13 np0005539508 python3.9[54002]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:12:14 np0005539508 python3.9[54156]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:12:15 np0005539508 python3.9[54312]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 01:12:16 np0005539508 python3.9[54397]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:12:18 np0005539508 python3.9[54550]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 01:12:21 np0005539508 python3.9[54747]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:12:22 np0005539508 python3.9[54899]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:12:22 np0005539508 systemd[1]: var-lib-containers-storage-overlay-compat3680912017-merged.mount: Deactivated successfully.
Nov 29 01:12:22 np0005539508 podman[54900]: 2025-11-29 06:12:22.315718116 +0000 UTC m=+0.169982416 system refresh
Nov 29 01:12:23 np0005539508 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 01:12:24 np0005539508 python3.9[55065]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:12:25 np0005539508 python3.9[55188]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396744.1048543-202-156631681591304/.source.json follow=False _original_basename=podman_network_config.j2 checksum=fb1097d0bfd110220a1faf17a72ee335f2fbc0a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:12:26 np0005539508 python3.9[55340]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:12:27 np0005539508 python3.9[55463]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764396745.7648883-247-221858944286329/.source.conf follow=False _original_basename=registries.conf.j2 checksum=25aa6c560e50dcbd81b989ea46a7865cb55b8998 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:12:28 np0005539508 python3.9[55615]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:12:28 np0005539508 python3.9[55767]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:12:29 np0005539508 python3.9[55921]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:12:30 np0005539508 python3.9[56073]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:12:31 np0005539508 python3.9[56225]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:12:33 np0005539508 python3.9[56378]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:12:35 np0005539508 python3.9[56532]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:12:35 np0005539508 python3.9[56684]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:12:36 np0005539508 python3.9[56836]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:12:38 np0005539508 python3.9[56989]: ansible-service_facts Invoked
Nov 29 01:12:38 np0005539508 network[57006]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 01:12:38 np0005539508 network[57007]: 'network-scripts' will be removed from distribution in near future.
Nov 29 01:12:38 np0005539508 network[57008]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 01:12:44 np0005539508 python3.9[57460]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:12:48 np0005539508 python3.9[57613]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 29 01:12:50 np0005539508 python3.9[57765]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:12:51 np0005539508 python3.9[57890]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764396769.7853591-679-72706364777051/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:12:51 np0005539508 python3.9[58044]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:12:52 np0005539508 python3.9[58169]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764396771.4156015-724-97859955955477/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:12:54 np0005539508 python3.9[58323]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:12:56 np0005539508 python3.9[58479]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 01:12:57 np0005539508 python3.9[58563]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:12:59 np0005539508 python3.9[58717]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 01:12:59 np0005539508 python3.9[58801]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 01:12:59 np0005539508 chronyd[800]: chronyd exiting
Nov 29 01:12:59 np0005539508 systemd[1]: Stopping NTP client/server...
Nov 29 01:12:59 np0005539508 systemd[1]: chronyd.service: Deactivated successfully.
Nov 29 01:12:59 np0005539508 systemd[1]: Stopped NTP client/server.
Nov 29 01:12:59 np0005539508 systemd[1]: Starting NTP client/server...
Nov 29 01:13:00 np0005539508 chronyd[58809]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 01:13:00 np0005539508 chronyd[58809]: Frequency -28.371 +/- 0.174 ppm read from /var/lib/chrony/drift
Nov 29 01:13:00 np0005539508 chronyd[58809]: Loaded seccomp filter (level 2)
Nov 29 01:13:00 np0005539508 systemd[1]: Started NTP client/server.
Nov 29 01:13:00 np0005539508 systemd[1]: session-12.scope: Deactivated successfully.
Nov 29 01:13:00 np0005539508 systemd[1]: session-12.scope: Consumed 29.075s CPU time.
Nov 29 01:13:00 np0005539508 systemd-logind[797]: Session 12 logged out. Waiting for processes to exit.
Nov 29 01:13:00 np0005539508 systemd-logind[797]: Removed session 12.
Nov 29 01:13:06 np0005539508 systemd-logind[797]: New session 13 of user zuul.
Nov 29 01:13:06 np0005539508 systemd[1]: Started Session 13 of User zuul.
Nov 29 01:13:07 np0005539508 python3.9[58992]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:13:08 np0005539508 python3.9[59144]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:13:08 np0005539508 python3.9[59267]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764396787.3012104-68-188575026150956/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:13:09 np0005539508 systemd[1]: session-13.scope: Deactivated successfully.
Nov 29 01:13:09 np0005539508 systemd[1]: session-13.scope: Consumed 1.783s CPU time.
Nov 29 01:13:09 np0005539508 systemd-logind[797]: Session 13 logged out. Waiting for processes to exit.
Nov 29 01:13:09 np0005539508 systemd-logind[797]: Removed session 13.
Nov 29 01:13:14 np0005539508 systemd-logind[797]: New session 14 of user zuul.
Nov 29 01:13:14 np0005539508 systemd[1]: Started Session 14 of User zuul.
Nov 29 01:13:15 np0005539508 python3.9[59445]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:13:16 np0005539508 python3.9[59601]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:13:17 np0005539508 python3.9[59776]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:13:18 np0005539508 python3.9[59899]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764396797.1152694-87-90994236813012/.source.json _original_basename=.12wbzro5 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:13:19 np0005539508 python3.9[60051]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:13:20 np0005539508 python3.9[60174]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764396799.25952-156-10757159972991/.source _original_basename=.0fck9vaq follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:13:21 np0005539508 python3.9[60328]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:13:22 np0005539508 python3.9[60480]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:13:23 np0005539508 python3.9[60603]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764396801.8954341-228-303409831711/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:13:23 np0005539508 python3.9[60755]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:13:24 np0005539508 python3.9[60878]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764396803.3047166-228-78037428667613/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:13:25 np0005539508 python3.9[61030]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:13:26 np0005539508 python3.9[61182]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:13:26 np0005539508 python3.9[61305]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396805.4252932-339-101541288583640/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:13:27 np0005539508 python3.9[61457]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:13:28 np0005539508 python3.9[61580]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396806.955549-384-258893491244390/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:13:29 np0005539508 python3.9[61732]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:13:29 np0005539508 systemd[1]: Reloading.
Nov 29 01:13:29 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:13:29 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:13:29 np0005539508 systemd[1]: Reloading.
Nov 29 01:13:29 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:13:30 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:13:30 np0005539508 systemd[1]: Starting EDPM Container Shutdown...
Nov 29 01:13:30 np0005539508 systemd[1]: Finished EDPM Container Shutdown.
Nov 29 01:13:31 np0005539508 python3.9[61958]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:13:31 np0005539508 python3.9[62081]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396810.780518-453-16139603154109/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:13:32 np0005539508 python3.9[62235]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:13:33 np0005539508 python3.9[62358]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396812.2579424-498-56510016278911/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:13:34 np0005539508 python3.9[62510]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:13:34 np0005539508 systemd[1]: Reloading.
Nov 29 01:13:34 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:13:34 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:13:34 np0005539508 systemd[1]: Reloading.
Nov 29 01:13:34 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:13:34 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:13:34 np0005539508 systemd[1]: Starting Create netns directory...
Nov 29 01:13:34 np0005539508 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 01:13:34 np0005539508 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 01:13:34 np0005539508 systemd[1]: Finished Create netns directory.
Nov 29 01:13:36 np0005539508 python3.9[62735]: ansible-ansible.builtin.service_facts Invoked
Nov 29 01:13:36 np0005539508 network[62752]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 01:13:36 np0005539508 network[62753]: 'network-scripts' will be removed from distribution in near future.
Nov 29 01:13:36 np0005539508 network[62754]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 01:13:41 np0005539508 python3.9[63019]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:13:42 np0005539508 systemd[1]: Reloading.
Nov 29 01:13:42 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:13:42 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:13:42 np0005539508 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 29 01:13:42 np0005539508 iptables.init[63060]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 29 01:13:42 np0005539508 iptables.init[63060]: iptables: Flushing firewall rules: [  OK  ]
Nov 29 01:13:42 np0005539508 systemd[1]: iptables.service: Deactivated successfully.
Nov 29 01:13:42 np0005539508 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 29 01:13:43 np0005539508 python3.9[63256]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:13:44 np0005539508 python3.9[63410]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:13:45 np0005539508 systemd[1]: Reloading.
Nov 29 01:13:45 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:13:45 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:13:45 np0005539508 systemd[1]: Starting Netfilter Tables...
Nov 29 01:13:45 np0005539508 systemd[1]: Finished Netfilter Tables.
Nov 29 01:13:46 np0005539508 python3.9[63605]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:13:47 np0005539508 python3.9[63758]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:13:48 np0005539508 python3.9[63883]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764396827.1078682-705-203401036606472/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:13:49 np0005539508 python3.9[64036]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 01:13:49 np0005539508 systemd[1]: Reloading OpenSSH server daemon...
Nov 29 01:13:49 np0005539508 systemd[1]: Reloaded OpenSSH server daemon.
Nov 29 01:13:50 np0005539508 python3.9[64192]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:13:51 np0005539508 python3.9[64344]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:13:51 np0005539508 python3.9[64467]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396830.5517242-798-165049261770878/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:13:53 np0005539508 python3.9[64619]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 01:13:53 np0005539508 systemd[1]: Starting Time & Date Service...
Nov 29 01:13:53 np0005539508 systemd[1]: Started Time & Date Service.
Nov 29 01:13:54 np0005539508 python3.9[64775]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:13:55 np0005539508 python3.9[64927]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:13:55 np0005539508 python3.9[65050]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764396834.4880562-903-277422186402380/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:13:56 np0005539508 python3.9[65202]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:13:57 np0005539508 python3.9[65325]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764396835.9894624-948-186026062831091/.source.yaml _original_basename=.wxruc_29 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:13:58 np0005539508 python3.9[65477]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:13:58 np0005539508 python3.9[65602]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396837.575144-993-85194286364716/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:13:59 np0005539508 python3.9[65754]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:14:00 np0005539508 python3.9[65907]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:14:01 np0005539508 python3[66060]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 01:14:02 np0005539508 python3.9[66212]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:14:03 np0005539508 python3.9[66335]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396841.8911374-1110-108776690523190/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:14:04 np0005539508 python3.9[66487]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:14:04 np0005539508 python3.9[66610]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396843.5071545-1155-183753486107571/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:14:05 np0005539508 python3.9[66762]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:14:06 np0005539508 python3.9[66885]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396845.110764-1200-40766953432673/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:14:07 np0005539508 python3.9[67037]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:14:07 np0005539508 python3.9[67160]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396846.6498346-1245-83437109346091/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:14:08 np0005539508 python3.9[67312]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:14:09 np0005539508 python3.9[67435]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396848.1760237-1290-124059481833104/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:14:10 np0005539508 python3.9[67587]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:14:11 np0005539508 python3.9[67739]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:14:12 np0005539508 python3.9[67898]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:14:13 np0005539508 python3.9[68051]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:14:14 np0005539508 python3.9[68205]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:14:15 np0005539508 python3.9[68357]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 01:14:16 np0005539508 python3.9[68510]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 01:14:16 np0005539508 systemd[1]: session-14.scope: Deactivated successfully.
Nov 29 01:14:16 np0005539508 systemd[1]: session-14.scope: Consumed 39.685s CPU time.
Nov 29 01:14:16 np0005539508 systemd-logind[797]: Session 14 logged out. Waiting for processes to exit.
Nov 29 01:14:16 np0005539508 systemd-logind[797]: Removed session 14.
Nov 29 01:14:23 np0005539508 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 01:14:26 np0005539508 systemd-logind[797]: New session 15 of user zuul.
Nov 29 01:14:26 np0005539508 systemd[1]: Started Session 15 of User zuul.
Nov 29 01:14:27 np0005539508 python3.9[68695]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 01:14:28 np0005539508 python3.9[68847]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:14:29 np0005539508 python3.9[68999]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:14:30 np0005539508 python3.9[69151]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCX0dhB1m0xL0qEi5jnTQLLB4bvueVV5foNrqU/OkfV/4gRyp7uP2q21lWq5Dtl2GLk51pS6oD41RI41Y5g7OSRs8b1Z66d6X1QgX0Qns6pv7FwmNSQ25+2VGV6lppnaN5e+JHiwTmzpf82hl/MiiJrHo7B63mllKyl9SZJxUhP9RR4czS3QNYQsZyP7sZeCWothTZ2Q/GK4BWBEtj2+ifeOpa342IivopCH05YVQOx9bpsdFHMYaalMDCwvr2lfVns8aTcpJ3z9uE8wLdKWTyiinT7nuLX6RuPwhXB2proBRH1wrGSIUgcVcizkWn8QizD8LlsGFcHIQJkmq+sJz6r7cCZLIfS6hdAzI+hYbJie6n/agwfxe4r+mbXsmmC6ALKKk7CEnaiNnDg0fgTaUfBPwSfu+JmVrjdSO+S8f/CMbtYeO6QknOxhLV9oK6knszv7nLlSYXTzXanHkN4Y0fW3dsSvoE+qDR0YijbbT8slqMd6z95wWVDFUmTcN8Nzk8=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILci1PI4hoB56+xxS5gSMKceuJ/dv6t7etpmtENwoSFr#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJIaOLr2ntjSUcigXC7a0sFoonsuh0ChCx2a1R6G8EDmJ8/ZB8NEiJE6KAQJDNU5XsXjuaC44eJhOUMRK9r98xA=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2GXKCQiCwQEMihcSwDVeJtG2CpTemmA6MTbtOkxbB3OAV5PK8v8imPvDGMDurfGFQG0RzWyv9szlMJXdgIkwejIfy/AY7p6nemHOpu6DdAx0EA/jg1YcOIeeEhyMw1/oFzjYClGMohaI1oTKHtR29UXWphTAroOkf26Exvco6hh2ApRTXV9ObzSoOyCC7+OZcOWgYzdoCfu/0FDGkH2ksKLQS7d4AAh/XZ/njXhK57U7ptxHCReUPECGRv7KB4f8TelZDAIeUyp7ngd/9ivUDO1zue1Qr9ECzTzAFqippGXFmYl3+oSid03CY7bqnxav4xWt7UukbaO57goyIPfkklPdC1kA7kZqa9bqeDU1WgDkqnLu8hluArB0Y0Jz+hDfx9pTbAL6MklraoLaGrnrgcibAollAN+7WGqdWxUotENYaljO7P1Z18MlNllWFzk4Le5jMLNL8qArSlzM+ufOThnLdGEuYZhH1x969AisGQ4MQWn0P0lZFu6fE5VSNA/k=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDdPWx5WoFJTxz6PiFZL5f3XrtE682RjGFiIpoe0LXZO#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFQlZMweHfLYiJFtm1r2tQze/oNx6KzgaXkK+Kof7POk0cFMLbTsXU8qgbQMh4o5LVO0Hbas4mAqxRkGcFCg2Po=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUVpPatup3d17omeiTdJaYR8jCcDbraJSPBxWy49Wxst4G+6/lD41HVIKmjgCgIbbmYSFBPQmoXt4gFXP4FRKna6AbQWi0kwF3/T2biQ2qCid0HVDSS8YRVlyrpdVc1/bIg6YNLkGnhzOMp0S1443+cg5PqutAbrAT1LOg6lSBu+K9gIqJ4un3l2guSweoyba5UhMyjrq4Pffx1QCuBggtYSjmA9Q1r5VVNc2J7AbP0QuzOe6J6DhpdGJsfmHDVXZb/4b/aPUdCTKkLseyUtcqElWVhhnGnpYSJdN81ejalSktGHE4JRHih19wwTokiKvoczUgijBzOfl+kt2ELcpDgzpzY0M9yd0Zz7wrK4rLM6hi8x3LYZXZv8N7KnawUcJ2jfzilx1BVLdNzgwDNB7ZlP4O9Vs3fKnBufCUFPNcRyWl6ooczepbgxqgSbr/Ham2O4/qzvJmzLtu0KxBkaFALRWnyM39nYVE/jrMKJ5ihtVDxIY9FGma/Jifg15gqI0=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN19pK3a7AH/OiwlqJTVWP/qzU/QzkC16s4D1xY1Vn6J#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLsXsjJNPVMX1YVTe2oBmcZpUSiv3HOeuICgZtQun4hTopMXH9dE1jQeUruGwqZ+NsKW6X2bLZZJ0/tcn2owL8Q=#012 create=True mode=0644 path=/tmp/ansible.tvsuidq6 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:14:31 np0005539508 python3.9[69303]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.tvsuidq6' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:14:32 np0005539508 python3.9[69457]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.tvsuidq6 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:14:33 np0005539508 systemd[1]: session-15.scope: Deactivated successfully.
Nov 29 01:14:33 np0005539508 systemd[1]: session-15.scope: Consumed 3.811s CPU time.
Nov 29 01:14:33 np0005539508 systemd-logind[797]: Session 15 logged out. Waiting for processes to exit.
Nov 29 01:14:33 np0005539508 systemd-logind[797]: Removed session 15.
Nov 29 01:14:39 np0005539508 systemd-logind[797]: New session 16 of user zuul.
Nov 29 01:14:39 np0005539508 systemd[1]: Started Session 16 of User zuul.
Nov 29 01:14:40 np0005539508 python3.9[69637]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:14:41 np0005539508 python3.9[69793]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 01:14:42 np0005539508 python3.9[69947]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 01:14:44 np0005539508 python3.9[70100]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:14:45 np0005539508 python3.9[70253]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:14:46 np0005539508 python3.9[70407]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:14:47 np0005539508 python3.9[70562]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:14:47 np0005539508 systemd[1]: session-16.scope: Deactivated successfully.
Nov 29 01:14:47 np0005539508 systemd[1]: session-16.scope: Consumed 4.898s CPU time.
Nov 29 01:14:47 np0005539508 systemd-logind[797]: Session 16 logged out. Waiting for processes to exit.
Nov 29 01:14:47 np0005539508 systemd-logind[797]: Removed session 16.
Nov 29 01:14:53 np0005539508 systemd-logind[797]: New session 17 of user zuul.
Nov 29 01:14:53 np0005539508 systemd[1]: Started Session 17 of User zuul.
Nov 29 01:14:54 np0005539508 python3.9[70742]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:14:55 np0005539508 python3.9[70898]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 01:14:56 np0005539508 python3.9[70982]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 01:14:58 np0005539508 python3.9[71133]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:15:00 np0005539508 python3.9[71284]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 01:15:00 np0005539508 python3.9[71434]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:15:00 np0005539508 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 01:15:01 np0005539508 python3.9[71585]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:15:02 np0005539508 systemd[1]: session-17.scope: Deactivated successfully.
Nov 29 01:15:02 np0005539508 systemd[1]: session-17.scope: Consumed 6.465s CPU time.
Nov 29 01:15:02 np0005539508 systemd-logind[797]: Session 17 logged out. Waiting for processes to exit.
Nov 29 01:15:02 np0005539508 systemd-logind[797]: Removed session 17.
Nov 29 01:15:09 np0005539508 chronyd[58809]: Selected source 162.159.200.123 (pool.ntp.org)
Nov 29 01:15:11 np0005539508 systemd-logind[797]: New session 18 of user zuul.
Nov 29 01:15:11 np0005539508 systemd[1]: Started Session 18 of User zuul.
Nov 29 01:15:18 np0005539508 python3[72355]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:15:20 np0005539508 python3[72450]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 01:15:21 np0005539508 python3[72477]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 01:15:22 np0005539508 python3[72503]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:15:22 np0005539508 kernel: loop: module loaded
Nov 29 01:15:22 np0005539508 kernel: loop3: detected capacity change from 0 to 14680064
Nov 29 01:15:22 np0005539508 python3[72538]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:15:22 np0005539508 lvm[72541]: PV /dev/loop3 not used.
Nov 29 01:15:22 np0005539508 lvm[72543]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 01:15:23 np0005539508 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 29 01:15:23 np0005539508 lvm[72545]:  0 logical volume(s) in volume group "ceph_vg0" now active
Nov 29 01:15:23 np0005539508 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 29 01:15:23 np0005539508 lvm[72553]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 01:15:23 np0005539508 lvm[72553]: VG ceph_vg0 finished
Nov 29 01:15:24 np0005539508 python3[72632]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:15:24 np0005539508 python3[72705]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764396923.9357212-37028-164864907491019/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:15:25 np0005539508 python3[72755]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:15:25 np0005539508 systemd[1]: Reloading.
Nov 29 01:15:25 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:15:25 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:15:26 np0005539508 systemd[1]: Starting Ceph OSD losetup...
Nov 29 01:15:26 np0005539508 bash[72795]: /dev/loop3: [64513]:4194937 (/var/lib/ceph-osd-0.img)
Nov 29 01:15:26 np0005539508 systemd[1]: Finished Ceph OSD losetup.
Nov 29 01:15:26 np0005539508 lvm[72797]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 01:15:26 np0005539508 lvm[72797]: VG ceph_vg0 finished
Nov 29 01:15:28 np0005539508 python3[72821]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:15:30 np0005539508 python3[72916]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 01:15:33 np0005539508 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 01:15:33 np0005539508 systemd[1]: Starting man-db-cache-update.service...
Nov 29 01:15:34 np0005539508 python3[73026]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 01:15:34 np0005539508 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 01:15:34 np0005539508 systemd[1]: Finished man-db-cache-update.service.
Nov 29 01:15:34 np0005539508 systemd[1]: run-r0c835895f1bb477fa6c9af610f15c51f.service: Deactivated successfully.
Nov 29 01:15:34 np0005539508 python3[73055]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:15:34 np0005539508 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 01:15:34 np0005539508 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 01:15:35 np0005539508 python3[73118]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:15:35 np0005539508 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 01:15:35 np0005539508 python3[73144]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:15:36 np0005539508 python3[73222]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:15:36 np0005539508 python3[73295]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764396936.192666-37220-84635208453874/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:15:37 np0005539508 python3[73397]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:15:38 np0005539508 python3[73470]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764396937.4645936-37238-279326176779959/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:15:38 np0005539508 python3[73520]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 01:15:39 np0005539508 python3[73548]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 01:15:39 np0005539508 python3[73576]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 01:15:40 np0005539508 python3[73604]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:15:40 np0005539508 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 01:15:40 np0005539508 systemd[1]: Created slice User Slice of UID 42477.
Nov 29 01:15:40 np0005539508 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 29 01:15:40 np0005539508 systemd-logind[797]: New session 19 of user ceph-admin.
Nov 29 01:15:40 np0005539508 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 29 01:15:40 np0005539508 systemd[1]: Starting User Manager for UID 42477...
Nov 29 01:15:40 np0005539508 systemd[73625]: Queued start job for default target Main User Target.
Nov 29 01:15:40 np0005539508 systemd[73625]: Created slice User Application Slice.
Nov 29 01:15:40 np0005539508 systemd[73625]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 01:15:40 np0005539508 systemd[73625]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 01:15:40 np0005539508 systemd[73625]: Reached target Paths.
Nov 29 01:15:40 np0005539508 systemd[73625]: Reached target Timers.
Nov 29 01:15:40 np0005539508 systemd[73625]: Starting D-Bus User Message Bus Socket...
Nov 29 01:15:40 np0005539508 systemd[73625]: Starting Create User's Volatile Files and Directories...
Nov 29 01:15:40 np0005539508 systemd[73625]: Listening on D-Bus User Message Bus Socket.
Nov 29 01:15:40 np0005539508 systemd[73625]: Reached target Sockets.
Nov 29 01:15:40 np0005539508 systemd[73625]: Finished Create User's Volatile Files and Directories.
Nov 29 01:15:40 np0005539508 systemd[73625]: Reached target Basic System.
Nov 29 01:15:40 np0005539508 systemd[73625]: Reached target Main User Target.
Nov 29 01:15:40 np0005539508 systemd[73625]: Startup finished in 160ms.
Nov 29 01:15:40 np0005539508 systemd[1]: Started User Manager for UID 42477.
Nov 29 01:15:40 np0005539508 systemd[1]: Started Session 19 of User ceph-admin.
Nov 29 01:15:40 np0005539508 systemd-logind[797]: Session 19 logged out. Waiting for processes to exit.
Nov 29 01:15:40 np0005539508 systemd[1]: session-19.scope: Deactivated successfully.
Nov 29 01:15:40 np0005539508 systemd-logind[797]: Removed session 19.
Nov 29 01:15:43 np0005539508 systemd[1]: var-lib-containers-storage-overlay-compat1725561353-lower\x2dmapped.mount: Deactivated successfully.
Nov 29 01:15:51 np0005539508 systemd[1]: Stopping User Manager for UID 42477...
Nov 29 01:15:51 np0005539508 systemd[73625]: Activating special unit Exit the Session...
Nov 29 01:15:51 np0005539508 systemd[73625]: Stopped target Main User Target.
Nov 29 01:15:51 np0005539508 systemd[73625]: Stopped target Basic System.
Nov 29 01:15:51 np0005539508 systemd[73625]: Stopped target Paths.
Nov 29 01:15:51 np0005539508 systemd[73625]: Stopped target Sockets.
Nov 29 01:15:51 np0005539508 systemd[73625]: Stopped target Timers.
Nov 29 01:15:51 np0005539508 systemd[73625]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 01:15:51 np0005539508 systemd[73625]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 01:15:51 np0005539508 systemd[73625]: Closed D-Bus User Message Bus Socket.
Nov 29 01:15:51 np0005539508 systemd[73625]: Stopped Create User's Volatile Files and Directories.
Nov 29 01:15:51 np0005539508 systemd[73625]: Removed slice User Application Slice.
Nov 29 01:15:51 np0005539508 systemd[73625]: Reached target Shutdown.
Nov 29 01:15:51 np0005539508 systemd[73625]: Finished Exit the Session.
Nov 29 01:15:51 np0005539508 systemd[73625]: Reached target Exit the Session.
Nov 29 01:15:51 np0005539508 systemd[1]: user@42477.service: Deactivated successfully.
Nov 29 01:15:51 np0005539508 systemd[1]: Stopped User Manager for UID 42477.
Nov 29 01:15:51 np0005539508 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 29 01:15:51 np0005539508 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 29 01:15:51 np0005539508 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 29 01:15:51 np0005539508 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 29 01:15:51 np0005539508 systemd[1]: Removed slice User Slice of UID 42477.
Nov 29 01:15:59 np0005539508 podman[73679]: 2025-11-29 06:15:59.995662999 +0000 UTC m=+19.123457298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:15:59 np0005539508 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 01:16:00 np0005539508 podman[73770]: 2025-11-29 06:16:00.073128117 +0000 UTC m=+0.046416358 container create acff35728a6edbac9c5bcb27012457000df2f7ad48ffe8d0bac113c10fdf0425 (image=quay.io/ceph/ceph:v18, name=gallant_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 01:16:00 np0005539508 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck1622958540-merged.mount: Deactivated successfully.
Nov 29 01:16:00 np0005539508 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 29 01:16:00 np0005539508 systemd[1]: Started libpod-conmon-acff35728a6edbac9c5bcb27012457000df2f7ad48ffe8d0bac113c10fdf0425.scope.
Nov 29 01:16:00 np0005539508 podman[73770]: 2025-11-29 06:16:00.05139746 +0000 UTC m=+0.024685721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:00 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:00 np0005539508 podman[73770]: 2025-11-29 06:16:00.180961216 +0000 UTC m=+0.154249457 container init acff35728a6edbac9c5bcb27012457000df2f7ad48ffe8d0bac113c10fdf0425 (image=quay.io/ceph/ceph:v18, name=gallant_bohr, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 01:16:00 np0005539508 podman[73770]: 2025-11-29 06:16:00.189338534 +0000 UTC m=+0.162626785 container start acff35728a6edbac9c5bcb27012457000df2f7ad48ffe8d0bac113c10fdf0425 (image=quay.io/ceph/ceph:v18, name=gallant_bohr, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:16:00 np0005539508 podman[73770]: 2025-11-29 06:16:00.192935146 +0000 UTC m=+0.166223407 container attach acff35728a6edbac9c5bcb27012457000df2f7ad48ffe8d0bac113c10fdf0425 (image=quay.io/ceph/ceph:v18, name=gallant_bohr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:16:00 np0005539508 gallant_bohr[73786]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 29 01:16:00 np0005539508 systemd[1]: libpod-acff35728a6edbac9c5bcb27012457000df2f7ad48ffe8d0bac113c10fdf0425.scope: Deactivated successfully.
Nov 29 01:16:00 np0005539508 podman[73770]: 2025-11-29 06:16:00.491333052 +0000 UTC m=+0.464621343 container died acff35728a6edbac9c5bcb27012457000df2f7ad48ffe8d0bac113c10fdf0425 (image=quay.io/ceph/ceph:v18, name=gallant_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 01:16:00 np0005539508 podman[73770]: 2025-11-29 06:16:00.547167336 +0000 UTC m=+0.520455607 container remove acff35728a6edbac9c5bcb27012457000df2f7ad48ffe8d0bac113c10fdf0425 (image=quay.io/ceph/ceph:v18, name=gallant_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 01:16:00 np0005539508 systemd[1]: libpod-conmon-acff35728a6edbac9c5bcb27012457000df2f7ad48ffe8d0bac113c10fdf0425.scope: Deactivated successfully.
Nov 29 01:16:00 np0005539508 podman[73802]: 2025-11-29 06:16:00.623953384 +0000 UTC m=+0.053787977 container create cc8b18ae3c2e99ddc33877647f9dcf1d894fdca8e80dc62659ad6f54946e6e11 (image=quay.io/ceph/ceph:v18, name=funny_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 01:16:00 np0005539508 systemd[1]: Started libpod-conmon-cc8b18ae3c2e99ddc33877647f9dcf1d894fdca8e80dc62659ad6f54946e6e11.scope.
Nov 29 01:16:00 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:00 np0005539508 podman[73802]: 2025-11-29 06:16:00.680836058 +0000 UTC m=+0.110670701 container init cc8b18ae3c2e99ddc33877647f9dcf1d894fdca8e80dc62659ad6f54946e6e11 (image=quay.io/ceph/ceph:v18, name=funny_jemison, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 01:16:00 np0005539508 podman[73802]: 2025-11-29 06:16:00.685219282 +0000 UTC m=+0.115053885 container start cc8b18ae3c2e99ddc33877647f9dcf1d894fdca8e80dc62659ad6f54946e6e11 (image=quay.io/ceph/ceph:v18, name=funny_jemison, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 01:16:00 np0005539508 podman[73802]: 2025-11-29 06:16:00.688347551 +0000 UTC m=+0.118182164 container attach cc8b18ae3c2e99ddc33877647f9dcf1d894fdca8e80dc62659ad6f54946e6e11 (image=quay.io/ceph/ceph:v18, name=funny_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:16:00 np0005539508 podman[73802]: 2025-11-29 06:16:00.595827806 +0000 UTC m=+0.025662519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:00 np0005539508 funny_jemison[73819]: 167 167
Nov 29 01:16:00 np0005539508 systemd[1]: libpod-cc8b18ae3c2e99ddc33877647f9dcf1d894fdca8e80dc62659ad6f54946e6e11.scope: Deactivated successfully.
Nov 29 01:16:00 np0005539508 podman[73802]: 2025-11-29 06:16:00.690514723 +0000 UTC m=+0.120349326 container died cc8b18ae3c2e99ddc33877647f9dcf1d894fdca8e80dc62659ad6f54946e6e11 (image=quay.io/ceph/ceph:v18, name=funny_jemison, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:16:00 np0005539508 podman[73802]: 2025-11-29 06:16:00.731993069 +0000 UTC m=+0.161827702 container remove cc8b18ae3c2e99ddc33877647f9dcf1d894fdca8e80dc62659ad6f54946e6e11 (image=quay.io/ceph/ceph:v18, name=funny_jemison, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 01:16:00 np0005539508 systemd[1]: libpod-conmon-cc8b18ae3c2e99ddc33877647f9dcf1d894fdca8e80dc62659ad6f54946e6e11.scope: Deactivated successfully.
Nov 29 01:16:00 np0005539508 podman[73837]: 2025-11-29 06:16:00.799084763 +0000 UTC m=+0.040713636 container create e323f52e3e9555e8671714f9649357986256e9d2c64c9fa6dbab07b1887223d0 (image=quay.io/ceph/ceph:v18, name=trusting_babbage, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 01:16:00 np0005539508 systemd[1]: Started libpod-conmon-e323f52e3e9555e8671714f9649357986256e9d2c64c9fa6dbab07b1887223d0.scope.
Nov 29 01:16:00 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:00 np0005539508 podman[73837]: 2025-11-29 06:16:00.865056214 +0000 UTC m=+0.106685077 container init e323f52e3e9555e8671714f9649357986256e9d2c64c9fa6dbab07b1887223d0 (image=quay.io/ceph/ceph:v18, name=trusting_babbage, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 01:16:00 np0005539508 podman[73837]: 2025-11-29 06:16:00.871027884 +0000 UTC m=+0.112656767 container start e323f52e3e9555e8671714f9649357986256e9d2c64c9fa6dbab07b1887223d0 (image=quay.io/ceph/ceph:v18, name=trusting_babbage, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:16:00 np0005539508 podman[73837]: 2025-11-29 06:16:00.779136357 +0000 UTC m=+0.020765240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:00 np0005539508 podman[73837]: 2025-11-29 06:16:00.875725257 +0000 UTC m=+0.117354130 container attach e323f52e3e9555e8671714f9649357986256e9d2c64c9fa6dbab07b1887223d0 (image=quay.io/ceph/ceph:v18, name=trusting_babbage, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Nov 29 01:16:00 np0005539508 trusting_babbage[73855]: AQCgjypp69I3NhAAR2bMWBw4r8XowKCsVsHPQw==
Nov 29 01:16:00 np0005539508 systemd[1]: libpod-e323f52e3e9555e8671714f9649357986256e9d2c64c9fa6dbab07b1887223d0.scope: Deactivated successfully.
Nov 29 01:16:00 np0005539508 podman[73837]: 2025-11-29 06:16:00.914054975 +0000 UTC m=+0.155683818 container died e323f52e3e9555e8671714f9649357986256e9d2c64c9fa6dbab07b1887223d0 (image=quay.io/ceph/ceph:v18, name=trusting_babbage, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 01:16:00 np0005539508 podman[73837]: 2025-11-29 06:16:00.952546977 +0000 UTC m=+0.194175820 container remove e323f52e3e9555e8671714f9649357986256e9d2c64c9fa6dbab07b1887223d0 (image=quay.io/ceph/ceph:v18, name=trusting_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 01:16:00 np0005539508 systemd[1]: libpod-conmon-e323f52e3e9555e8671714f9649357986256e9d2c64c9fa6dbab07b1887223d0.scope: Deactivated successfully.
Nov 29 01:16:01 np0005539508 systemd[1]: var-lib-containers-storage-overlay-1bb652b2846f5f6d97c8292a070fdb4a9590a81fb766a576419b5b0ebf30613e-merged.mount: Deactivated successfully.
Nov 29 01:16:01 np0005539508 podman[73874]: 2025-11-29 06:16:01.057170595 +0000 UTC m=+0.073587779 container create 35f35919f9f565e8a2aac5c2a13bb6bb9f93aad0087ec296eba1851ee33db6b0 (image=quay.io/ceph/ceph:v18, name=epic_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:16:01 np0005539508 systemd[1]: Started libpod-conmon-35f35919f9f565e8a2aac5c2a13bb6bb9f93aad0087ec296eba1851ee33db6b0.scope.
Nov 29 01:16:01 np0005539508 podman[73874]: 2025-11-29 06:16:01.029324955 +0000 UTC m=+0.045742179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:01 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:01 np0005539508 podman[73874]: 2025-11-29 06:16:01.15745546 +0000 UTC m=+0.173872614 container init 35f35919f9f565e8a2aac5c2a13bb6bb9f93aad0087ec296eba1851ee33db6b0 (image=quay.io/ceph/ceph:v18, name=epic_dubinsky, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:16:01 np0005539508 podman[73874]: 2025-11-29 06:16:01.163637685 +0000 UTC m=+0.180054869 container start 35f35919f9f565e8a2aac5c2a13bb6bb9f93aad0087ec296eba1851ee33db6b0 (image=quay.io/ceph/ceph:v18, name=epic_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 01:16:01 np0005539508 podman[73874]: 2025-11-29 06:16:01.167834844 +0000 UTC m=+0.184252008 container attach 35f35919f9f565e8a2aac5c2a13bb6bb9f93aad0087ec296eba1851ee33db6b0 (image=quay.io/ceph/ceph:v18, name=epic_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 01:16:01 np0005539508 epic_dubinsky[73890]: AQChjyppOUhwCxAADdGewaDdp9HBbsTf1aZPoQ==
Nov 29 01:16:01 np0005539508 systemd[1]: libpod-35f35919f9f565e8a2aac5c2a13bb6bb9f93aad0087ec296eba1851ee33db6b0.scope: Deactivated successfully.
Nov 29 01:16:01 np0005539508 podman[73874]: 2025-11-29 06:16:01.198458473 +0000 UTC m=+0.214875657 container died 35f35919f9f565e8a2aac5c2a13bb6bb9f93aad0087ec296eba1851ee33db6b0 (image=quay.io/ceph/ceph:v18, name=epic_dubinsky, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 01:16:01 np0005539508 systemd[1]: var-lib-containers-storage-overlay-c22e25198297cf37cc6f3df5aad6148246e65c32ea74214eef98a0a4761b1ba5-merged.mount: Deactivated successfully.
Nov 29 01:16:01 np0005539508 podman[73874]: 2025-11-29 06:16:01.24556302 +0000 UTC m=+0.261980204 container remove 35f35919f9f565e8a2aac5c2a13bb6bb9f93aad0087ec296eba1851ee33db6b0 (image=quay.io/ceph/ceph:v18, name=epic_dubinsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 01:16:01 np0005539508 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 01:16:01 np0005539508 systemd[1]: libpod-conmon-35f35919f9f565e8a2aac5c2a13bb6bb9f93aad0087ec296eba1851ee33db6b0.scope: Deactivated successfully.
Nov 29 01:16:01 np0005539508 podman[73908]: 2025-11-29 06:16:01.349979082 +0000 UTC m=+0.075292607 container create 9d226f4d8fc530cfb1623e5a5fc5a9d75b61ffda7727bebfbe085fc54e866210 (image=quay.io/ceph/ceph:v18, name=jovial_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:16:01 np0005539508 systemd[1]: Started libpod-conmon-9d226f4d8fc530cfb1623e5a5fc5a9d75b61ffda7727bebfbe085fc54e866210.scope.
Nov 29 01:16:01 np0005539508 podman[73908]: 2025-11-29 06:16:01.311816869 +0000 UTC m=+0.037130444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:01 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:01 np0005539508 podman[73908]: 2025-11-29 06:16:01.444048631 +0000 UTC m=+0.169362166 container init 9d226f4d8fc530cfb1623e5a5fc5a9d75b61ffda7727bebfbe085fc54e866210 (image=quay.io/ceph/ceph:v18, name=jovial_thompson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 01:16:01 np0005539508 podman[73908]: 2025-11-29 06:16:01.453254762 +0000 UTC m=+0.178568257 container start 9d226f4d8fc530cfb1623e5a5fc5a9d75b61ffda7727bebfbe085fc54e866210 (image=quay.io/ceph/ceph:v18, name=jovial_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:16:01 np0005539508 podman[73908]: 2025-11-29 06:16:01.45740585 +0000 UTC m=+0.182719365 container attach 9d226f4d8fc530cfb1623e5a5fc5a9d75b61ffda7727bebfbe085fc54e866210 (image=quay.io/ceph/ceph:v18, name=jovial_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:16:01 np0005539508 jovial_thompson[73924]: AQChjypp97uGHBAAchmJk9cEjMyNqQhaf4l4Xw==
Nov 29 01:16:01 np0005539508 systemd[1]: libpod-9d226f4d8fc530cfb1623e5a5fc5a9d75b61ffda7727bebfbe085fc54e866210.scope: Deactivated successfully.
Nov 29 01:16:01 np0005539508 podman[73908]: 2025-11-29 06:16:01.482583254 +0000 UTC m=+0.207896759 container died 9d226f4d8fc530cfb1623e5a5fc5a9d75b61ffda7727bebfbe085fc54e866210 (image=quay.io/ceph/ceph:v18, name=jovial_thompson, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 01:16:01 np0005539508 podman[73908]: 2025-11-29 06:16:01.517978388 +0000 UTC m=+0.243291873 container remove 9d226f4d8fc530cfb1623e5a5fc5a9d75b61ffda7727bebfbe085fc54e866210 (image=quay.io/ceph/ceph:v18, name=jovial_thompson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:16:01 np0005539508 systemd[1]: libpod-conmon-9d226f4d8fc530cfb1623e5a5fc5a9d75b61ffda7727bebfbe085fc54e866210.scope: Deactivated successfully.
Nov 29 01:16:01 np0005539508 podman[73941]: 2025-11-29 06:16:01.612081768 +0000 UTC m=+0.063292517 container create 3982cc6f4bab531c36a4862c49c9053dba07a2cd12da64fbc6b5936a916d631c (image=quay.io/ceph/ceph:v18, name=stoic_cannon, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 01:16:01 np0005539508 systemd[1]: Started libpod-conmon-3982cc6f4bab531c36a4862c49c9053dba07a2cd12da64fbc6b5936a916d631c.scope.
Nov 29 01:16:01 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:01 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c796d98103d2ba3058ed8d158cdae282291c4cf023038ab09440abdcfe11d28a/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:01 np0005539508 podman[73941]: 2025-11-29 06:16:01.68123223 +0000 UTC m=+0.132442989 container init 3982cc6f4bab531c36a4862c49c9053dba07a2cd12da64fbc6b5936a916d631c (image=quay.io/ceph/ceph:v18, name=stoic_cannon, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 01:16:01 np0005539508 podman[73941]: 2025-11-29 06:16:01.591482393 +0000 UTC m=+0.042693132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:01 np0005539508 podman[73941]: 2025-11-29 06:16:01.687632431 +0000 UTC m=+0.138843150 container start 3982cc6f4bab531c36a4862c49c9053dba07a2cd12da64fbc6b5936a916d631c (image=quay.io/ceph/ceph:v18, name=stoic_cannon, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 01:16:01 np0005539508 podman[73941]: 2025-11-29 06:16:01.691544682 +0000 UTC m=+0.142755441 container attach 3982cc6f4bab531c36a4862c49c9053dba07a2cd12da64fbc6b5936a916d631c (image=quay.io/ceph/ceph:v18, name=stoic_cannon, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:16:01 np0005539508 stoic_cannon[73957]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 29 01:16:01 np0005539508 stoic_cannon[73957]: setting min_mon_release = pacific
Nov 29 01:16:01 np0005539508 stoic_cannon[73957]: /usr/bin/monmaptool: set fsid to 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 01:16:01 np0005539508 stoic_cannon[73957]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 29 01:16:01 np0005539508 systemd[1]: libpod-3982cc6f4bab531c36a4862c49c9053dba07a2cd12da64fbc6b5936a916d631c.scope: Deactivated successfully.
Nov 29 01:16:01 np0005539508 podman[73941]: 2025-11-29 06:16:01.729661194 +0000 UTC m=+0.180871943 container died 3982cc6f4bab531c36a4862c49c9053dba07a2cd12da64fbc6b5936a916d631c (image=quay.io/ceph/ceph:v18, name=stoic_cannon, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 01:16:01 np0005539508 podman[73941]: 2025-11-29 06:16:01.770820972 +0000 UTC m=+0.222031701 container remove 3982cc6f4bab531c36a4862c49c9053dba07a2cd12da64fbc6b5936a916d631c (image=quay.io/ceph/ceph:v18, name=stoic_cannon, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:16:01 np0005539508 systemd[1]: libpod-conmon-3982cc6f4bab531c36a4862c49c9053dba07a2cd12da64fbc6b5936a916d631c.scope: Deactivated successfully.
Nov 29 01:16:01 np0005539508 podman[73976]: 2025-11-29 06:16:01.850161892 +0000 UTC m=+0.051848161 container create c83c2ed855b173aadf1d319a62f75d21842951e657aa3ada22f5bbb3a6239fb3 (image=quay.io/ceph/ceph:v18, name=jovial_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 01:16:01 np0005539508 systemd[1]: Started libpod-conmon-c83c2ed855b173aadf1d319a62f75d21842951e657aa3ada22f5bbb3a6239fb3.scope.
Nov 29 01:16:01 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:01 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8248aea0cf649fd1c016102ec2d70f20399a7f5cb44c6aae16891f2fecb6e87/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:01 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8248aea0cf649fd1c016102ec2d70f20399a7f5cb44c6aae16891f2fecb6e87/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:01 np0005539508 podman[73976]: 2025-11-29 06:16:01.828017664 +0000 UTC m=+0.029703973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:01 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8248aea0cf649fd1c016102ec2d70f20399a7f5cb44c6aae16891f2fecb6e87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:01 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8248aea0cf649fd1c016102ec2d70f20399a7f5cb44c6aae16891f2fecb6e87/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:01 np0005539508 podman[73976]: 2025-11-29 06:16:01.944819038 +0000 UTC m=+0.146505327 container init c83c2ed855b173aadf1d319a62f75d21842951e657aa3ada22f5bbb3a6239fb3 (image=quay.io/ceph/ceph:v18, name=jovial_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:16:01 np0005539508 podman[73976]: 2025-11-29 06:16:01.950919011 +0000 UTC m=+0.152605290 container start c83c2ed855b173aadf1d319a62f75d21842951e657aa3ada22f5bbb3a6239fb3 (image=quay.io/ceph/ceph:v18, name=jovial_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:16:01 np0005539508 podman[73976]: 2025-11-29 06:16:01.954858673 +0000 UTC m=+0.156544942 container attach c83c2ed855b173aadf1d319a62f75d21842951e657aa3ada22f5bbb3a6239fb3 (image=quay.io/ceph/ceph:v18, name=jovial_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:16:02 np0005539508 systemd[1]: libpod-c83c2ed855b173aadf1d319a62f75d21842951e657aa3ada22f5bbb3a6239fb3.scope: Deactivated successfully.
Nov 29 01:16:02 np0005539508 podman[73976]: 2025-11-29 06:16:02.052063401 +0000 UTC m=+0.253749750 container died c83c2ed855b173aadf1d319a62f75d21842951e657aa3ada22f5bbb3a6239fb3 (image=quay.io/ceph/ceph:v18, name=jovial_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:16:02 np0005539508 systemd[1]: var-lib-containers-storage-overlay-c8248aea0cf649fd1c016102ec2d70f20399a7f5cb44c6aae16891f2fecb6e87-merged.mount: Deactivated successfully.
Nov 29 01:16:02 np0005539508 podman[73976]: 2025-11-29 06:16:02.10490816 +0000 UTC m=+0.306594459 container remove c83c2ed855b173aadf1d319a62f75d21842951e657aa3ada22f5bbb3a6239fb3 (image=quay.io/ceph/ceph:v18, name=jovial_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:16:02 np0005539508 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 01:16:02 np0005539508 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 01:16:02 np0005539508 systemd[1]: libpod-conmon-c83c2ed855b173aadf1d319a62f75d21842951e657aa3ada22f5bbb3a6239fb3.scope: Deactivated successfully.
Nov 29 01:16:02 np0005539508 systemd[1]: Reloading.
Nov 29 01:16:02 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:16:02 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:16:02 np0005539508 systemd[1]: Reloading.
Nov 29 01:16:02 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:16:02 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:16:02 np0005539508 systemd[1]: Reached target All Ceph clusters and services.
Nov 29 01:16:02 np0005539508 systemd[1]: Reloading.
Nov 29 01:16:02 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:16:02 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:16:02 np0005539508 systemd[1]: Reached target Ceph cluster 336ec58c-893b-528f-a0c1-6ed1196bc047.
Nov 29 01:16:02 np0005539508 systemd[1]: Reloading.
Nov 29 01:16:02 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:16:02 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:16:03 np0005539508 systemd[1]: Reloading.
Nov 29 01:16:03 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:16:03 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:16:03 np0005539508 systemd[1]: Created slice Slice /system/ceph-336ec58c-893b-528f-a0c1-6ed1196bc047.
Nov 29 01:16:03 np0005539508 systemd[1]: Reached target System Time Set.
Nov 29 01:16:03 np0005539508 systemd[1]: Reached target System Time Synchronized.
Nov 29 01:16:03 np0005539508 systemd[1]: Starting Ceph mon.compute-0 for 336ec58c-893b-528f-a0c1-6ed1196bc047...
Nov 29 01:16:03 np0005539508 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 01:16:03 np0005539508 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 01:16:03 np0005539508 podman[74272]: 2025-11-29 06:16:03.748281502 +0000 UTC m=+0.058576053 container create 7dad2a0c9576d9ed265ee38fcd17a68df8cb8e5f59cf0de18ae06a6c8fff3d4e (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:16:03 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30c27b6a84d460a1022682dab7ad6135e30f0b4d9feda45deee56876583f7e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:03 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30c27b6a84d460a1022682dab7ad6135e30f0b4d9feda45deee56876583f7e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:03 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30c27b6a84d460a1022682dab7ad6135e30f0b4d9feda45deee56876583f7e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:03 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30c27b6a84d460a1022682dab7ad6135e30f0b4d9feda45deee56876583f7e7/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:03 np0005539508 podman[74272]: 2025-11-29 06:16:03.816367304 +0000 UTC m=+0.126661865 container init 7dad2a0c9576d9ed265ee38fcd17a68df8cb8e5f59cf0de18ae06a6c8fff3d4e (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 01:16:03 np0005539508 podman[74272]: 2025-11-29 06:16:03.728918793 +0000 UTC m=+0.039213354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:03 np0005539508 podman[74272]: 2025-11-29 06:16:03.831209085 +0000 UTC m=+0.141503616 container start 7dad2a0c9576d9ed265ee38fcd17a68df8cb8e5f59cf0de18ae06a6c8fff3d4e (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 01:16:03 np0005539508 bash[74272]: 7dad2a0c9576d9ed265ee38fcd17a68df8cb8e5f59cf0de18ae06a6c8fff3d4e
Nov 29 01:16:03 np0005539508 systemd[1]: Started Ceph mon.compute-0 for 336ec58c-893b-528f-a0c1-6ed1196bc047.
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: pidfile_write: ignore empty --pid-file
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: load: jerasure load: lrc 
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: RocksDB version: 7.9.2
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: Git sha 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: DB SUMMARY
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: DB Session ID:  TJX3Q57MMVQ4ZHTA4ZSA
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: CURRENT file:  CURRENT
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                         Options.error_if_exists: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                       Options.create_if_missing: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                                     Options.env: 0x55bdf897dc40
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                                Options.info_log: 0x55bdf97e0ec0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                              Options.statistics: (nil)
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                               Options.use_fsync: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                              Options.db_log_dir: 
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                                 Options.wal_dir: 
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                    Options.write_buffer_manager: 0x55bdf97f0b40
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                  Options.unordered_write: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                               Options.row_cache: None
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                              Options.wal_filter: None
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:             Options.two_write_queues: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:             Options.wal_compression: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:             Options.atomic_flush: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:             Options.max_background_jobs: 2
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:             Options.max_background_compactions: -1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:             Options.max_subcompactions: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:             Options.max_total_wal_size: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                          Options.max_open_files: -1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:       Options.compaction_readahead_size: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: Compression algorithms supported:
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: #011kZSTD supported: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: #011kXpressCompression supported: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: #011kBZip2Compression supported: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: #011kLZ4Compression supported: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: #011kZlibCompression supported: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: #011kSnappyCompression supported: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:           Options.merge_operator: 
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:        Options.compaction_filter: None
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdf97e0aa0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55bdf97d91f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:        Options.write_buffer_size: 33554432
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:  Options.max_write_buffer_number: 2
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:          Options.compression: NoCompression
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:             Options.num_levels: 7
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                           Options.bloom_locality: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                               Options.ttl: 2592000
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                       Options.enable_blob_files: false
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                           Options.min_blob_size: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764396963888190, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764396963890457, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "TJX3Q57MMVQ4ZHTA4ZSA", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764396963890609, "job": 1, "event": "recovery_finished"}
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55bdf9802e00
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: DB pointer 0x55bdf988c000
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55bdf97d91f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@-1(???) e0 preinit fsid 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-11-29T06:16:01.992828Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864324,os=Linux}
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 01:16:03 np0005539508 podman[74294]: 2025-11-29 06:16:03.951407995 +0000 UTC m=+0.071438678 container create cf740ba4d7b8cf8d88bd04710851633750ee858f1328f61cf1ccabb9e2b87222 (image=quay.io/ceph/ceph:v18, name=interesting_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader).mds e1 new map
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mkfs 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 29 01:16:03 np0005539508 ceph-mon[74293]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 01:16:04 np0005539508 systemd[1]: Started libpod-conmon-cf740ba4d7b8cf8d88bd04710851633750ee858f1328f61cf1ccabb9e2b87222.scope.
Nov 29 01:16:04 np0005539508 podman[74294]: 2025-11-29 06:16:03.923157583 +0000 UTC m=+0.043188316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:04 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:04 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/630348b35a315d3deb39f67e93f0e8926d163c2462dbbd1af67137706198ac9e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:04 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/630348b35a315d3deb39f67e93f0e8926d163c2462dbbd1af67137706198ac9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:04 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/630348b35a315d3deb39f67e93f0e8926d163c2462dbbd1af67137706198ac9e/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:04 np0005539508 podman[74294]: 2025-11-29 06:16:04.086102376 +0000 UTC m=+0.206133089 container init cf740ba4d7b8cf8d88bd04710851633750ee858f1328f61cf1ccabb9e2b87222 (image=quay.io/ceph/ceph:v18, name=interesting_poitras, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 01:16:04 np0005539508 podman[74294]: 2025-11-29 06:16:04.098483707 +0000 UTC m=+0.218514380 container start cf740ba4d7b8cf8d88bd04710851633750ee858f1328f61cf1ccabb9e2b87222 (image=quay.io/ceph/ceph:v18, name=interesting_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:16:04 np0005539508 podman[74294]: 2025-11-29 06:16:04.102339737 +0000 UTC m=+0.222370420 container attach cf740ba4d7b8cf8d88bd04710851633750ee858f1328f61cf1ccabb9e2b87222 (image=quay.io/ceph/ceph:v18, name=interesting_poitras, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 01:16:04 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 01:16:04 np0005539508 ceph-mon[74293]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2396279677' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 01:16:04 np0005539508 interesting_poitras[74349]:  cluster:
Nov 29 01:16:04 np0005539508 interesting_poitras[74349]:    id:     336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 01:16:04 np0005539508 interesting_poitras[74349]:    health: HEALTH_OK
Nov 29 01:16:04 np0005539508 interesting_poitras[74349]: 
Nov 29 01:16:04 np0005539508 interesting_poitras[74349]:  services:
Nov 29 01:16:04 np0005539508 interesting_poitras[74349]:    mon: 1 daemons, quorum compute-0 (age 0.56684s)
Nov 29 01:16:04 np0005539508 interesting_poitras[74349]:    mgr: no daemons active
Nov 29 01:16:04 np0005539508 interesting_poitras[74349]:    osd: 0 osds: 0 up, 0 in
Nov 29 01:16:04 np0005539508 interesting_poitras[74349]: 
Nov 29 01:16:04 np0005539508 interesting_poitras[74349]:  data:
Nov 29 01:16:04 np0005539508 interesting_poitras[74349]:    pools:   0 pools, 0 pgs
Nov 29 01:16:04 np0005539508 interesting_poitras[74349]:    objects: 0 objects, 0 B
Nov 29 01:16:04 np0005539508 interesting_poitras[74349]:    usage:   0 B used, 0 B / 0 B avail
Nov 29 01:16:04 np0005539508 interesting_poitras[74349]:    pgs:     
Nov 29 01:16:04 np0005539508 interesting_poitras[74349]: 
Nov 29 01:16:04 np0005539508 systemd[1]: libpod-cf740ba4d7b8cf8d88bd04710851633750ee858f1328f61cf1ccabb9e2b87222.scope: Deactivated successfully.
Nov 29 01:16:04 np0005539508 podman[74294]: 2025-11-29 06:16:04.530937896 +0000 UTC m=+0.650968609 container died cf740ba4d7b8cf8d88bd04710851633750ee858f1328f61cf1ccabb9e2b87222 (image=quay.io/ceph/ceph:v18, name=interesting_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 01:16:04 np0005539508 systemd[1]: var-lib-containers-storage-overlay-630348b35a315d3deb39f67e93f0e8926d163c2462dbbd1af67137706198ac9e-merged.mount: Deactivated successfully.
Nov 29 01:16:04 np0005539508 podman[74294]: 2025-11-29 06:16:04.59449413 +0000 UTC m=+0.714524783 container remove cf740ba4d7b8cf8d88bd04710851633750ee858f1328f61cf1ccabb9e2b87222 (image=quay.io/ceph/ceph:v18, name=interesting_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:16:04 np0005539508 systemd[1]: libpod-conmon-cf740ba4d7b8cf8d88bd04710851633750ee858f1328f61cf1ccabb9e2b87222.scope: Deactivated successfully.
Nov 29 01:16:04 np0005539508 podman[74387]: 2025-11-29 06:16:04.680477609 +0000 UTC m=+0.057840622 container create 116806a2c2fb6a1295a2d9c64402c2f4207eb587b01360eb73bd798ab510af98 (image=quay.io/ceph/ceph:v18, name=pensive_kilby, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:16:04 np0005539508 systemd[1]: Started libpod-conmon-116806a2c2fb6a1295a2d9c64402c2f4207eb587b01360eb73bd798ab510af98.scope.
Nov 29 01:16:04 np0005539508 podman[74387]: 2025-11-29 06:16:04.652960488 +0000 UTC m=+0.030323541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:04 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:04 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716e8280d49065c454cd5301ac12ee55e8d10d8dca1d571eabe1dd936b709cd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:04 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716e8280d49065c454cd5301ac12ee55e8d10d8dca1d571eabe1dd936b709cd1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:04 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716e8280d49065c454cd5301ac12ee55e8d10d8dca1d571eabe1dd936b709cd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:04 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716e8280d49065c454cd5301ac12ee55e8d10d8dca1d571eabe1dd936b709cd1/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:04 np0005539508 podman[74387]: 2025-11-29 06:16:04.781580087 +0000 UTC m=+0.158943150 container init 116806a2c2fb6a1295a2d9c64402c2f4207eb587b01360eb73bd798ab510af98 (image=quay.io/ceph/ceph:v18, name=pensive_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 01:16:04 np0005539508 podman[74387]: 2025-11-29 06:16:04.793286359 +0000 UTC m=+0.170649342 container start 116806a2c2fb6a1295a2d9c64402c2f4207eb587b01360eb73bd798ab510af98 (image=quay.io/ceph/ceph:v18, name=pensive_kilby, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:16:04 np0005539508 podman[74387]: 2025-11-29 06:16:04.797456398 +0000 UTC m=+0.174819381 container attach 116806a2c2fb6a1295a2d9c64402c2f4207eb587b01360eb73bd798ab510af98 (image=quay.io/ceph/ceph:v18, name=pensive_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:16:04 np0005539508 ceph-mon[74293]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 01:16:05 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 29 01:16:05 np0005539508 ceph-mon[74293]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2327784468' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 01:16:05 np0005539508 ceph-mon[74293]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2327784468' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 01:16:05 np0005539508 pensive_kilby[74404]: 
Nov 29 01:16:05 np0005539508 pensive_kilby[74404]: [global]
Nov 29 01:16:05 np0005539508 pensive_kilby[74404]: 	fsid = 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 01:16:05 np0005539508 pensive_kilby[74404]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 29 01:16:05 np0005539508 systemd[1]: libpod-116806a2c2fb6a1295a2d9c64402c2f4207eb587b01360eb73bd798ab510af98.scope: Deactivated successfully.
Nov 29 01:16:05 np0005539508 podman[74387]: 2025-11-29 06:16:05.208494049 +0000 UTC m=+0.585857052 container died 116806a2c2fb6a1295a2d9c64402c2f4207eb587b01360eb73bd798ab510af98 (image=quay.io/ceph/ceph:v18, name=pensive_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:16:05 np0005539508 systemd[1]: var-lib-containers-storage-overlay-716e8280d49065c454cd5301ac12ee55e8d10d8dca1d571eabe1dd936b709cd1-merged.mount: Deactivated successfully.
Nov 29 01:16:05 np0005539508 podman[74387]: 2025-11-29 06:16:05.250955264 +0000 UTC m=+0.628318237 container remove 116806a2c2fb6a1295a2d9c64402c2f4207eb587b01360eb73bd798ab510af98 (image=quay.io/ceph/ceph:v18, name=pensive_kilby, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:16:05 np0005539508 systemd[1]: libpod-conmon-116806a2c2fb6a1295a2d9c64402c2f4207eb587b01360eb73bd798ab510af98.scope: Deactivated successfully.
Nov 29 01:16:05 np0005539508 podman[74441]: 2025-11-29 06:16:05.347215715 +0000 UTC m=+0.066459327 container create 61720bcd8debe5b1b02fe1da93557881b23eb587dea28f20ffcda325c7bcc9f1 (image=quay.io/ceph/ceph:v18, name=adoring_davinci, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 01:16:05 np0005539508 systemd[1]: Started libpod-conmon-61720bcd8debe5b1b02fe1da93557881b23eb587dea28f20ffcda325c7bcc9f1.scope.
Nov 29 01:16:05 np0005539508 podman[74441]: 2025-11-29 06:16:05.319080116 +0000 UTC m=+0.038323788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:05 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:05 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46b8b1e82a7a8ef9d3780bf11fc7218a63a6ca4d1e9a08ba6eaf49f28df01133/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:05 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46b8b1e82a7a8ef9d3780bf11fc7218a63a6ca4d1e9a08ba6eaf49f28df01133/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:05 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46b8b1e82a7a8ef9d3780bf11fc7218a63a6ca4d1e9a08ba6eaf49f28df01133/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:05 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46b8b1e82a7a8ef9d3780bf11fc7218a63a6ca4d1e9a08ba6eaf49f28df01133/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:05 np0005539508 podman[74441]: 2025-11-29 06:16:05.437579348 +0000 UTC m=+0.156823010 container init 61720bcd8debe5b1b02fe1da93557881b23eb587dea28f20ffcda325c7bcc9f1 (image=quay.io/ceph/ceph:v18, name=adoring_davinci, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 01:16:05 np0005539508 podman[74441]: 2025-11-29 06:16:05.447197111 +0000 UTC m=+0.166440683 container start 61720bcd8debe5b1b02fe1da93557881b23eb587dea28f20ffcda325c7bcc9f1 (image=quay.io/ceph/ceph:v18, name=adoring_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:16:05 np0005539508 podman[74441]: 2025-11-29 06:16:05.451457782 +0000 UTC m=+0.170701384 container attach 61720bcd8debe5b1b02fe1da93557881b23eb587dea28f20ffcda325c7bcc9f1 (image=quay.io/ceph/ceph:v18, name=adoring_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:16:05 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:16:05 np0005539508 ceph-mon[74293]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3213909719' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:16:05 np0005539508 systemd[1]: libpod-61720bcd8debe5b1b02fe1da93557881b23eb587dea28f20ffcda325c7bcc9f1.scope: Deactivated successfully.
Nov 29 01:16:05 np0005539508 podman[74484]: 2025-11-29 06:16:05.904949537 +0000 UTC m=+0.024112945 container died 61720bcd8debe5b1b02fe1da93557881b23eb587dea28f20ffcda325c7bcc9f1 (image=quay.io/ceph/ceph:v18, name=adoring_davinci, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 29 01:16:05 np0005539508 systemd[1]: var-lib-containers-storage-overlay-46b8b1e82a7a8ef9d3780bf11fc7218a63a6ca4d1e9a08ba6eaf49f28df01133-merged.mount: Deactivated successfully.
Nov 29 01:16:05 np0005539508 podman[74484]: 2025-11-29 06:16:05.957124997 +0000 UTC m=+0.076288405 container remove 61720bcd8debe5b1b02fe1da93557881b23eb587dea28f20ffcda325c7bcc9f1 (image=quay.io/ceph/ceph:v18, name=adoring_davinci, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:16:05 np0005539508 systemd[1]: libpod-conmon-61720bcd8debe5b1b02fe1da93557881b23eb587dea28f20ffcda325c7bcc9f1.scope: Deactivated successfully.
Nov 29 01:16:05 np0005539508 ceph-mon[74293]: from='client.? 192.168.122.100:0/2327784468' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 01:16:05 np0005539508 ceph-mon[74293]: from='client.? 192.168.122.100:0/2327784468' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 01:16:05 np0005539508 systemd[1]: Stopping Ceph mon.compute-0 for 336ec58c-893b-528f-a0c1-6ed1196bc047...
Nov 29 01:16:06 np0005539508 ceph-mon[74293]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 29 01:16:06 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 29 01:16:06 np0005539508 ceph-mon[74293]: mon.compute-0@0(leader) e1 shutdown
Nov 29 01:16:06 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0[74289]: 2025-11-29T06:16:06.235+0000 7f0f5d161640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 29 01:16:06 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0[74289]: 2025-11-29T06:16:06.235+0000 7f0f5d161640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 29 01:16:06 np0005539508 ceph-mon[74293]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 01:16:06 np0005539508 ceph-mon[74293]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 01:16:06 np0005539508 podman[74528]: 2025-11-29 06:16:06.266660839 +0000 UTC m=+0.083574162 container died 7dad2a0c9576d9ed265ee38fcd17a68df8cb8e5f59cf0de18ae06a6c8fff3d4e (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 01:16:06 np0005539508 systemd[1]: var-lib-containers-storage-overlay-d30c27b6a84d460a1022682dab7ad6135e30f0b4d9feda45deee56876583f7e7-merged.mount: Deactivated successfully.
Nov 29 01:16:06 np0005539508 podman[74528]: 2025-11-29 06:16:06.314026163 +0000 UTC m=+0.130939486 container remove 7dad2a0c9576d9ed265ee38fcd17a68df8cb8e5f59cf0de18ae06a6c8fff3d4e (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:16:06 np0005539508 bash[74528]: ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0
Nov 29 01:16:06 np0005539508 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 01:16:06 np0005539508 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 01:16:06 np0005539508 systemd[1]: ceph-336ec58c-893b-528f-a0c1-6ed1196bc047@mon.compute-0.service: Deactivated successfully.
Nov 29 01:16:06 np0005539508 systemd[1]: Stopped Ceph mon.compute-0 for 336ec58c-893b-528f-a0c1-6ed1196bc047.
Nov 29 01:16:06 np0005539508 systemd[1]: ceph-336ec58c-893b-528f-a0c1-6ed1196bc047@mon.compute-0.service: Consumed 1.256s CPU time.
Nov 29 01:16:06 np0005539508 systemd[1]: Starting Ceph mon.compute-0 for 336ec58c-893b-528f-a0c1-6ed1196bc047...
Nov 29 01:16:06 np0005539508 podman[74634]: 2025-11-29 06:16:06.8493511 +0000 UTC m=+0.070406778 container create c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:16:06 np0005539508 podman[74634]: 2025-11-29 06:16:06.822005644 +0000 UTC m=+0.043061362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:06 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f8b27703670abcc306e2b54256f9521c5a2e0dc66a9e3ac2658fc7598bf8ffc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:06 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f8b27703670abcc306e2b54256f9521c5a2e0dc66a9e3ac2658fc7598bf8ffc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:06 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f8b27703670abcc306e2b54256f9521c5a2e0dc66a9e3ac2658fc7598bf8ffc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:06 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f8b27703670abcc306e2b54256f9521c5a2e0dc66a9e3ac2658fc7598bf8ffc/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:06 np0005539508 podman[74634]: 2025-11-29 06:16:06.947839784 +0000 UTC m=+0.168895512 container init c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 01:16:06 np0005539508 podman[74634]: 2025-11-29 06:16:06.963494418 +0000 UTC m=+0.184550096 container start c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 01:16:06 np0005539508 bash[74634]: c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf
Nov 29 01:16:06 np0005539508 systemd[1]: Started Ceph mon.compute-0 for 336ec58c-893b-528f-a0c1-6ed1196bc047.
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: pidfile_write: ignore empty --pid-file
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: load: jerasure load: lrc 
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: RocksDB version: 7.9.2
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: Git sha 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: DB SUMMARY
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: DB Session ID:  VL4WOW4AK06DDHF5VQBP
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: CURRENT file:  CURRENT
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55210 ; 
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                         Options.error_if_exists: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                       Options.create_if_missing: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                                     Options.env: 0x55e1a328cc40
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                                Options.info_log: 0x55e1a5839040
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                              Options.statistics: (nil)
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                               Options.use_fsync: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                              Options.db_log_dir: 
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                                 Options.wal_dir: 
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                    Options.write_buffer_manager: 0x55e1a5848b40
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                  Options.unordered_write: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                               Options.row_cache: None
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                              Options.wal_filter: None
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:             Options.two_write_queues: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:             Options.wal_compression: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:             Options.atomic_flush: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:             Options.max_background_jobs: 2
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:             Options.max_background_compactions: -1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:             Options.max_subcompactions: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:             Options.max_total_wal_size: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                          Options.max_open_files: -1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:       Options.compaction_readahead_size: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: Compression algorithms supported:
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: 	kZSTD supported: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: 	kXpressCompression supported: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: 	kBZip2Compression supported: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: 	kLZ4Compression supported: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: 	kZlibCompression supported: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: 	kSnappyCompression supported: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:           Options.merge_operator: 
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:        Options.compaction_filter: None
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1a5838c40)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55e1a58311f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:        Options.write_buffer_size: 33554432
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:  Options.max_write_buffer_number: 2
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:          Options.compression: NoCompression
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:             Options.num_levels: 7
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                           Options.bloom_locality: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                               Options.ttl: 2592000
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                       Options.enable_blob_files: false
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                           Options.min_blob_size: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764396967031602, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764396967036250, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 54849, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 136, "table_properties": {"data_size": 53385, "index_size": 170, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2933, "raw_average_key_size": 29, "raw_value_size": 51027, "raw_average_value_size": 515, "num_data_blocks": 9, "num_entries": 99, "num_filter_entries": 99, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396967, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764396967036384, "job": 1, "event": "recovery_finished"}
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e1a585ae00
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: DB pointer 0x55e1a58e4000
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   55.46 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0   55.46 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 3.50 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 3.50 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e1a58311f0#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: mon.compute-0@-1(???) e1 preinit fsid 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: mon.compute-0@-1(???).mds e1 new map
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: mon.compute-0@-1(???).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 29 01:16:07 np0005539508 podman[74655]: 2025-11-29 06:16:07.070762562 +0000 UTC m=+0.060384175 container create ee677ece805d8a292e28fced46d48e505d1794a40b15220e571403844b4a7f3e (image=quay.io/ceph/ceph:v18, name=interesting_lamarr, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:16:07 np0005539508 systemd[1]: Started libpod-conmon-ee677ece805d8a292e28fced46d48e505d1794a40b15220e571403844b4a7f3e.scope.
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 01:16:07 np0005539508 podman[74655]: 2025-11-29 06:16:07.051982279 +0000 UTC m=+0.041603922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:07 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:07 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa17425c446d8957fc900079d4b79ebb90c3761663ba22fa081eac6f9c54852/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:07 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa17425c446d8957fc900079d4b79ebb90c3761663ba22fa081eac6f9c54852/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:07 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa17425c446d8957fc900079d4b79ebb90c3761663ba22fa081eac6f9c54852/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:07 np0005539508 podman[74655]: 2025-11-29 06:16:07.178100987 +0000 UTC m=+0.167722630 container init ee677ece805d8a292e28fced46d48e505d1794a40b15220e571403844b4a7f3e (image=quay.io/ceph/ceph:v18, name=interesting_lamarr, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:16:07 np0005539508 podman[74655]: 2025-11-29 06:16:07.18774241 +0000 UTC m=+0.177364053 container start ee677ece805d8a292e28fced46d48e505d1794a40b15220e571403844b4a7f3e (image=quay.io/ceph/ceph:v18, name=interesting_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:16:07 np0005539508 podman[74655]: 2025-11-29 06:16:07.19229297 +0000 UTC m=+0.181914583 container attach ee677ece805d8a292e28fced46d48e505d1794a40b15220e571403844b4a7f3e (image=quay.io/ceph/ceph:v18, name=interesting_lamarr, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 01:16:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Nov 29 01:16:07 np0005539508 systemd[1]: libpod-ee677ece805d8a292e28fced46d48e505d1794a40b15220e571403844b4a7f3e.scope: Deactivated successfully.
Nov 29 01:16:07 np0005539508 podman[74655]: 2025-11-29 06:16:07.580007219 +0000 UTC m=+0.569628912 container died ee677ece805d8a292e28fced46d48e505d1794a40b15220e571403844b4a7f3e (image=quay.io/ceph/ceph:v18, name=interesting_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 01:16:07 np0005539508 systemd[1]: var-lib-containers-storage-overlay-8aa17425c446d8957fc900079d4b79ebb90c3761663ba22fa081eac6f9c54852-merged.mount: Deactivated successfully.
Nov 29 01:16:07 np0005539508 podman[74655]: 2025-11-29 06:16:07.634162426 +0000 UTC m=+0.623784029 container remove ee677ece805d8a292e28fced46d48e505d1794a40b15220e571403844b4a7f3e (image=quay.io/ceph/ceph:v18, name=interesting_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 01:16:07 np0005539508 systemd[1]: libpod-conmon-ee677ece805d8a292e28fced46d48e505d1794a40b15220e571403844b4a7f3e.scope: Deactivated successfully.
Nov 29 01:16:07 np0005539508 podman[74746]: 2025-11-29 06:16:07.697182833 +0000 UTC m=+0.043828524 container create 69adb72f1f72bfd776ffbba6933fb37ee375e2ee21955e734dc4ab77fb394c48 (image=quay.io/ceph/ceph:v18, name=stupefied_feistel, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:16:07 np0005539508 systemd[1]: Started libpod-conmon-69adb72f1f72bfd776ffbba6933fb37ee375e2ee21955e734dc4ab77fb394c48.scope.
Nov 29 01:16:07 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:07 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cabc489fac5450d73c52d15257aa4eadf8ca5f191cc1b8ff9304b8f07b284e63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:07 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cabc489fac5450d73c52d15257aa4eadf8ca5f191cc1b8ff9304b8f07b284e63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:07 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cabc489fac5450d73c52d15257aa4eadf8ca5f191cc1b8ff9304b8f07b284e63/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:07 np0005539508 podman[74746]: 2025-11-29 06:16:07.676579169 +0000 UTC m=+0.023224890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:07 np0005539508 podman[74746]: 2025-11-29 06:16:07.783473692 +0000 UTC m=+0.130119453 container init 69adb72f1f72bfd776ffbba6933fb37ee375e2ee21955e734dc4ab77fb394c48 (image=quay.io/ceph/ceph:v18, name=stupefied_feistel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 01:16:07 np0005539508 podman[74746]: 2025-11-29 06:16:07.794104603 +0000 UTC m=+0.140750314 container start 69adb72f1f72bfd776ffbba6933fb37ee375e2ee21955e734dc4ab77fb394c48 (image=quay.io/ceph/ceph:v18, name=stupefied_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 01:16:07 np0005539508 podman[74746]: 2025-11-29 06:16:07.79858065 +0000 UTC m=+0.145226351 container attach 69adb72f1f72bfd776ffbba6933fb37ee375e2ee21955e734dc4ab77fb394c48 (image=quay.io/ceph/ceph:v18, name=stupefied_feistel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:16:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Nov 29 01:16:08 np0005539508 systemd[1]: libpod-69adb72f1f72bfd776ffbba6933fb37ee375e2ee21955e734dc4ab77fb394c48.scope: Deactivated successfully.
Nov 29 01:16:08 np0005539508 podman[74746]: 2025-11-29 06:16:08.196073567 +0000 UTC m=+0.542719278 container died 69adb72f1f72bfd776ffbba6933fb37ee375e2ee21955e734dc4ab77fb394c48 (image=quay.io/ceph/ceph:v18, name=stupefied_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 01:16:08 np0005539508 systemd[1]: var-lib-containers-storage-overlay-cabc489fac5450d73c52d15257aa4eadf8ca5f191cc1b8ff9304b8f07b284e63-merged.mount: Deactivated successfully.
Nov 29 01:16:08 np0005539508 podman[74746]: 2025-11-29 06:16:08.245772507 +0000 UTC m=+0.592418188 container remove 69adb72f1f72bfd776ffbba6933fb37ee375e2ee21955e734dc4ab77fb394c48 (image=quay.io/ceph/ceph:v18, name=stupefied_feistel, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 01:16:08 np0005539508 systemd[1]: libpod-conmon-69adb72f1f72bfd776ffbba6933fb37ee375e2ee21955e734dc4ab77fb394c48.scope: Deactivated successfully.
Nov 29 01:16:08 np0005539508 systemd[1]: Reloading.
Nov 29 01:16:08 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:16:08 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:16:08 np0005539508 systemd[1]: Reloading.
Nov 29 01:16:08 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:16:08 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:16:08 np0005539508 systemd[1]: Starting Ceph mgr.compute-0.vxabpq for 336ec58c-893b-528f-a0c1-6ed1196bc047...
Nov 29 01:16:09 np0005539508 podman[74929]: 2025-11-29 06:16:09.110387207 +0000 UTC m=+0.052885892 container create 6f81410254a706b9fc390aa00af336410de8290fc59b4feb47f58688b4bdf6ee (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:16:09 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7578b82584efd40e9ae289ece12a96eb84cad8a437183825e491820111ef9aea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:09 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7578b82584efd40e9ae289ece12a96eb84cad8a437183825e491820111ef9aea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:09 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7578b82584efd40e9ae289ece12a96eb84cad8a437183825e491820111ef9aea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:09 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7578b82584efd40e9ae289ece12a96eb84cad8a437183825e491820111ef9aea/merged/var/lib/ceph/mgr/ceph-compute-0.vxabpq supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:09 np0005539508 podman[74929]: 2025-11-29 06:16:09.081430345 +0000 UTC m=+0.023929070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:09 np0005539508 podman[74929]: 2025-11-29 06:16:09.192519467 +0000 UTC m=+0.135018212 container init 6f81410254a706b9fc390aa00af336410de8290fc59b4feb47f58688b4bdf6ee (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 01:16:09 np0005539508 podman[74929]: 2025-11-29 06:16:09.210743844 +0000 UTC m=+0.153242519 container start 6f81410254a706b9fc390aa00af336410de8290fc59b4feb47f58688b4bdf6ee (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 01:16:09 np0005539508 bash[74929]: 6f81410254a706b9fc390aa00af336410de8290fc59b4feb47f58688b4bdf6ee
Nov 29 01:16:09 np0005539508 systemd[1]: Started Ceph mgr.compute-0.vxabpq for 336ec58c-893b-528f-a0c1-6ed1196bc047.
Nov 29 01:16:09 np0005539508 ceph-mgr[74948]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 01:16:09 np0005539508 ceph-mgr[74948]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 29 01:16:09 np0005539508 ceph-mgr[74948]: pidfile_write: ignore empty --pid-file
Nov 29 01:16:09 np0005539508 podman[74949]: 2025-11-29 06:16:09.311548244 +0000 UTC m=+0.055249769 container create fb8233439b31bfe7c5d62ef54d1ad2cd18a62a62521619bab2bd8438d73ddd13 (image=quay.io/ceph/ceph:v18, name=ecstatic_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:16:09 np0005539508 systemd[1]: Started libpod-conmon-fb8233439b31bfe7c5d62ef54d1ad2cd18a62a62521619bab2bd8438d73ddd13.scope.
Nov 29 01:16:09 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:09 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'alerts'
Nov 29 01:16:09 np0005539508 podman[74949]: 2025-11-29 06:16:09.293047239 +0000 UTC m=+0.036748754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:09 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142729a31dc8e45559c7f0d7e5d02053cb8677f39579b6d1fde84a1b0acfb807/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:09 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142729a31dc8e45559c7f0d7e5d02053cb8677f39579b6d1fde84a1b0acfb807/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:09 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142729a31dc8e45559c7f0d7e5d02053cb8677f39579b6d1fde84a1b0acfb807/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:09 np0005539508 podman[74949]: 2025-11-29 06:16:09.406521578 +0000 UTC m=+0.150223103 container init fb8233439b31bfe7c5d62ef54d1ad2cd18a62a62521619bab2bd8438d73ddd13 (image=quay.io/ceph/ceph:v18, name=ecstatic_mestorf, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:16:09 np0005539508 podman[74949]: 2025-11-29 06:16:09.419128436 +0000 UTC m=+0.162829921 container start fb8233439b31bfe7c5d62ef54d1ad2cd18a62a62521619bab2bd8438d73ddd13 (image=quay.io/ceph/ceph:v18, name=ecstatic_mestorf, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 01:16:09 np0005539508 podman[74949]: 2025-11-29 06:16:09.422945694 +0000 UTC m=+0.166647229 container attach fb8233439b31bfe7c5d62ef54d1ad2cd18a62a62521619bab2bd8438d73ddd13 (image=quay.io/ceph/ceph:v18, name=ecstatic_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 01:16:09 np0005539508 ceph-mgr[74948]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 01:16:09 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'balancer'
Nov 29 01:16:09 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:09.673+0000 7fa614c10140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 01:16:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 01:16:09 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/806291629' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]: 
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]: {
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:    "fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:    "health": {
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "status": "HEALTH_OK",
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "checks": {},
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "mutes": []
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:    },
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:    "election_epoch": 5,
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:    "quorum": [
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        0
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:    ],
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:    "quorum_names": [
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "compute-0"
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:    ],
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:    "quorum_age": 2,
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:    "monmap": {
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "epoch": 1,
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "min_mon_release_name": "reef",
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "num_mons": 1
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:    },
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:    "osdmap": {
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "epoch": 1,
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "num_osds": 0,
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "num_up_osds": 0,
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "osd_up_since": 0,
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "num_in_osds": 0,
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "osd_in_since": 0,
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "num_remapped_pgs": 0
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:    },
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:    "pgmap": {
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "pgs_by_state": [],
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "num_pgs": 0,
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "num_pools": 0,
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "num_objects": 0,
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "data_bytes": 0,
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "bytes_used": 0,
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "bytes_avail": 0,
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "bytes_total": 0
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:    },
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:    "fsmap": {
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "epoch": 1,
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "by_rank": [],
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "up:standby": 0
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:    },
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:    "mgrmap": {
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "available": false,
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "num_standbys": 0,
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "modules": [
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:            "iostat",
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:            "nfs",
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:            "restful"
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        ],
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "services": {}
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:    },
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:    "servicemap": {
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "epoch": 1,
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "modified": "2025-11-29T06:16:03.952029+0000",
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:        "services": {}
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:    },
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]:    "progress_events": {}
Nov 29 01:16:09 np0005539508 ecstatic_mestorf[74990]: }
Nov 29 01:16:09 np0005539508 systemd[1]: libpod-fb8233439b31bfe7c5d62ef54d1ad2cd18a62a62521619bab2bd8438d73ddd13.scope: Deactivated successfully.
Nov 29 01:16:09 np0005539508 podman[74949]: 2025-11-29 06:16:09.816218731 +0000 UTC m=+0.559920216 container died fb8233439b31bfe7c5d62ef54d1ad2cd18a62a62521619bab2bd8438d73ddd13 (image=quay.io/ceph/ceph:v18, name=ecstatic_mestorf, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:16:09 np0005539508 systemd[1]: var-lib-containers-storage-overlay-142729a31dc8e45559c7f0d7e5d02053cb8677f39579b6d1fde84a1b0acfb807-merged.mount: Deactivated successfully.
Nov 29 01:16:09 np0005539508 podman[74949]: 2025-11-29 06:16:09.854456665 +0000 UTC m=+0.598158150 container remove fb8233439b31bfe7c5d62ef54d1ad2cd18a62a62521619bab2bd8438d73ddd13 (image=quay.io/ceph/ceph:v18, name=ecstatic_mestorf, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 01:16:09 np0005539508 systemd[1]: libpod-conmon-fb8233439b31bfe7c5d62ef54d1ad2cd18a62a62521619bab2bd8438d73ddd13.scope: Deactivated successfully.
Nov 29 01:16:09 np0005539508 ceph-mgr[74948]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 01:16:09 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'cephadm'
Nov 29 01:16:09 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:09.919+0000 7fa614c10140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 01:16:11 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'crash'
Nov 29 01:16:11 np0005539508 podman[75038]: 2025-11-29 06:16:11.956355017 +0000 UTC m=+0.070828640 container create f44c90a3a202d1cecd0309f0635641921cadf47d1c11d0dad7dd1515fd08c49c (image=quay.io/ceph/ceph:v18, name=admiring_hopper, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 01:16:12 np0005539508 systemd[1]: Started libpod-conmon-f44c90a3a202d1cecd0309f0635641921cadf47d1c11d0dad7dd1515fd08c49c.scope.
Nov 29 01:16:12 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:12 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9124e2202a9748f1891ec18ce1055e0f51d4ddd97733d22e7ae3c1df9ae8c2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:12 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9124e2202a9748f1891ec18ce1055e0f51d4ddd97733d22e7ae3c1df9ae8c2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:12 np0005539508 podman[75038]: 2025-11-29 06:16:11.92542853 +0000 UTC m=+0.039902203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:12 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9124e2202a9748f1891ec18ce1055e0f51d4ddd97733d22e7ae3c1df9ae8c2e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:12 np0005539508 podman[75038]: 2025-11-29 06:16:12.032656002 +0000 UTC m=+0.147129605 container init f44c90a3a202d1cecd0309f0635641921cadf47d1c11d0dad7dd1515fd08c49c (image=quay.io/ceph/ceph:v18, name=admiring_hopper, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 01:16:12 np0005539508 podman[75038]: 2025-11-29 06:16:12.037832199 +0000 UTC m=+0.152305792 container start f44c90a3a202d1cecd0309f0635641921cadf47d1c11d0dad7dd1515fd08c49c (image=quay.io/ceph/ceph:v18, name=admiring_hopper, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:16:12 np0005539508 podman[75038]: 2025-11-29 06:16:12.040917246 +0000 UTC m=+0.155390839 container attach f44c90a3a202d1cecd0309f0635641921cadf47d1c11d0dad7dd1515fd08c49c (image=quay.io/ceph/ceph:v18, name=admiring_hopper, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 01:16:12 np0005539508 ceph-mgr[74948]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 01:16:12 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'dashboard'
Nov 29 01:16:12 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:12.074+0000 7fa614c10140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 01:16:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 01:16:12 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/295402507' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]: 
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]: {
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:    "fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:    "health": {
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "status": "HEALTH_OK",
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "checks": {},
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "mutes": []
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:    },
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:    "election_epoch": 5,
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:    "quorum": [
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        0
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:    ],
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:    "quorum_names": [
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "compute-0"
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:    ],
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:    "quorum_age": 5,
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:    "monmap": {
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "epoch": 1,
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "min_mon_release_name": "reef",
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "num_mons": 1
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:    },
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:    "osdmap": {
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "epoch": 1,
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "num_osds": 0,
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "num_up_osds": 0,
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "osd_up_since": 0,
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "num_in_osds": 0,
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "osd_in_since": 0,
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "num_remapped_pgs": 0
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:    },
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:    "pgmap": {
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "pgs_by_state": [],
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "num_pgs": 0,
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "num_pools": 0,
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "num_objects": 0,
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "data_bytes": 0,
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "bytes_used": 0,
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "bytes_avail": 0,
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "bytes_total": 0
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:    },
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:    "fsmap": {
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "epoch": 1,
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "by_rank": [],
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "up:standby": 0
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:    },
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:    "mgrmap": {
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "available": false,
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "num_standbys": 0,
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "modules": [
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:            "iostat",
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:            "nfs",
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:            "restful"
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        ],
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "services": {}
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:    },
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:    "servicemap": {
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "epoch": 1,
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "modified": "2025-11-29T06:16:03.952029+0000",
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:        "services": {}
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:    },
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]:    "progress_events": {}
Nov 29 01:16:12 np0005539508 admiring_hopper[75055]: }
Nov 29 01:16:12 np0005539508 systemd[1]: libpod-f44c90a3a202d1cecd0309f0635641921cadf47d1c11d0dad7dd1515fd08c49c.scope: Deactivated successfully.
Nov 29 01:16:12 np0005539508 podman[75038]: 2025-11-29 06:16:12.48552515 +0000 UTC m=+0.599998743 container died f44c90a3a202d1cecd0309f0635641921cadf47d1c11d0dad7dd1515fd08c49c (image=quay.io/ceph/ceph:v18, name=admiring_hopper, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:16:12 np0005539508 systemd[1]: var-lib-containers-storage-overlay-e9124e2202a9748f1891ec18ce1055e0f51d4ddd97733d22e7ae3c1df9ae8c2e-merged.mount: Deactivated successfully.
Nov 29 01:16:12 np0005539508 podman[75038]: 2025-11-29 06:16:12.541268322 +0000 UTC m=+0.655741925 container remove f44c90a3a202d1cecd0309f0635641921cadf47d1c11d0dad7dd1515fd08c49c (image=quay.io/ceph/ceph:v18, name=admiring_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 01:16:12 np0005539508 systemd[1]: libpod-conmon-f44c90a3a202d1cecd0309f0635641921cadf47d1c11d0dad7dd1515fd08c49c.scope: Deactivated successfully.
Nov 29 01:16:13 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'devicehealth'
Nov 29 01:16:13 np0005539508 ceph-mgr[74948]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 01:16:13 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'diskprediction_local'
Nov 29 01:16:13 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:13.642+0000 7fa614c10140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 01:16:14 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 29 01:16:14 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 29 01:16:14 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]:  from numpy import show_config as show_numpy_config
Nov 29 01:16:14 np0005539508 ceph-mgr[74948]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 01:16:14 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'influx'
Nov 29 01:16:14 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:14.135+0000 7fa614c10140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 01:16:14 np0005539508 ceph-mgr[74948]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 01:16:14 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'insights'
Nov 29 01:16:14 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:14.365+0000 7fa614c10140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 01:16:14 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'iostat'
Nov 29 01:16:14 np0005539508 podman[75092]: 2025-11-29 06:16:14.63133696 +0000 UTC m=+0.067486289 container create a77def05d3b542e6813e73fd1fb76cb49c49c816897e2229a04faa28e0c0f0b7 (image=quay.io/ceph/ceph:v18, name=focused_bardeen, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:16:14 np0005539508 systemd[1]: Started libpod-conmon-a77def05d3b542e6813e73fd1fb76cb49c49c816897e2229a04faa28e0c0f0b7.scope.
Nov 29 01:16:14 np0005539508 podman[75092]: 2025-11-29 06:16:14.593847581 +0000 UTC m=+0.029996940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:14 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cddf36f16e757a55ae5f250a90a3607e1fce512b5296d4827124cfdf9aab761/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cddf36f16e757a55ae5f250a90a3607e1fce512b5296d4827124cfdf9aab761/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cddf36f16e757a55ae5f250a90a3607e1fce512b5296d4827124cfdf9aab761/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:14 np0005539508 podman[75092]: 2025-11-29 06:16:14.729770781 +0000 UTC m=+0.165920190 container init a77def05d3b542e6813e73fd1fb76cb49c49c816897e2229a04faa28e0c0f0b7 (image=quay.io/ceph/ceph:v18, name=focused_bardeen, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 01:16:14 np0005539508 podman[75092]: 2025-11-29 06:16:14.739956449 +0000 UTC m=+0.176105828 container start a77def05d3b542e6813e73fd1fb76cb49c49c816897e2229a04faa28e0c0f0b7 (image=quay.io/ceph/ceph:v18, name=focused_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 01:16:14 np0005539508 podman[75092]: 2025-11-29 06:16:14.745125115 +0000 UTC m=+0.181274484 container attach a77def05d3b542e6813e73fd1fb76cb49c49c816897e2229a04faa28e0c0f0b7 (image=quay.io/ceph/ceph:v18, name=focused_bardeen, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 01:16:14 np0005539508 ceph-mgr[74948]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 01:16:14 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:14.851+0000 7fa614c10140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 01:16:14 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'k8sevents'
Nov 29 01:16:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 01:16:15 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3107029855' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]: 
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]: {
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:    "fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:    "health": {
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "status": "HEALTH_OK",
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "checks": {},
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "mutes": []
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:    },
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:    "election_epoch": 5,
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:    "quorum": [
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        0
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:    ],
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:    "quorum_names": [
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "compute-0"
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:    ],
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:    "quorum_age": 8,
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:    "monmap": {
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "epoch": 1,
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "min_mon_release_name": "reef",
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "num_mons": 1
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:    },
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:    "osdmap": {
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "epoch": 1,
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "num_osds": 0,
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "num_up_osds": 0,
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "osd_up_since": 0,
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "num_in_osds": 0,
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "osd_in_since": 0,
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "num_remapped_pgs": 0
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:    },
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:    "pgmap": {
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "pgs_by_state": [],
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "num_pgs": 0,
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "num_pools": 0,
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "num_objects": 0,
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "data_bytes": 0,
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "bytes_used": 0,
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "bytes_avail": 0,
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "bytes_total": 0
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:    },
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:    "fsmap": {
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "epoch": 1,
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "by_rank": [],
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "up:standby": 0
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:    },
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:    "mgrmap": {
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "available": false,
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "num_standbys": 0,
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "modules": [
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:            "iostat",
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:            "nfs",
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:            "restful"
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        ],
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "services": {}
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:    },
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:    "servicemap": {
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "epoch": 1,
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "modified": "2025-11-29T06:16:03.952029+0000",
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:        "services": {}
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:    },
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]:    "progress_events": {}
Nov 29 01:16:15 np0005539508 focused_bardeen[75108]: }
Nov 29 01:16:15 np0005539508 systemd[1]: libpod-a77def05d3b542e6813e73fd1fb76cb49c49c816897e2229a04faa28e0c0f0b7.scope: Deactivated successfully.
Nov 29 01:16:15 np0005539508 podman[75092]: 2025-11-29 06:16:15.155693215 +0000 UTC m=+0.591842584 container died a77def05d3b542e6813e73fd1fb76cb49c49c816897e2229a04faa28e0c0f0b7 (image=quay.io/ceph/ceph:v18, name=focused_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:16:15 np0005539508 systemd[1]: var-lib-containers-storage-overlay-3cddf36f16e757a55ae5f250a90a3607e1fce512b5296d4827124cfdf9aab761-merged.mount: Deactivated successfully.
Nov 29 01:16:15 np0005539508 podman[75092]: 2025-11-29 06:16:15.214477266 +0000 UTC m=+0.650626635 container remove a77def05d3b542e6813e73fd1fb76cb49c49c816897e2229a04faa28e0c0f0b7 (image=quay.io/ceph/ceph:v18, name=focused_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:16:15 np0005539508 systemd[1]: libpod-conmon-a77def05d3b542e6813e73fd1fb76cb49c49c816897e2229a04faa28e0c0f0b7.scope: Deactivated successfully.
Nov 29 01:16:16 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'localpool'
Nov 29 01:16:16 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'mds_autoscaler'
Nov 29 01:16:17 np0005539508 podman[75146]: 2025-11-29 06:16:17.330523632 +0000 UTC m=+0.077904262 container create cb8e5addde5e5cd9bf02fc355bb83ea57edf4951d2717739ecb5c21f9d24497a (image=quay.io/ceph/ceph:v18, name=elastic_brahmagupta, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 01:16:17 np0005539508 systemd[1]: Started libpod-conmon-cb8e5addde5e5cd9bf02fc355bb83ea57edf4951d2717739ecb5c21f9d24497a.scope.
Nov 29 01:16:17 np0005539508 podman[75146]: 2025-11-29 06:16:17.299833425 +0000 UTC m=+0.047214095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:17 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:17 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'mirroring'
Nov 29 01:16:17 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584160f81864a3434bad69d42828dd79ebb6a402dc26fa6137d5a1261b1e6d9b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:17 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584160f81864a3434bad69d42828dd79ebb6a402dc26fa6137d5a1261b1e6d9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:17 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584160f81864a3434bad69d42828dd79ebb6a402dc26fa6137d5a1261b1e6d9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:17 np0005539508 podman[75146]: 2025-11-29 06:16:17.433237944 +0000 UTC m=+0.180618594 container init cb8e5addde5e5cd9bf02fc355bb83ea57edf4951d2717739ecb5c21f9d24497a (image=quay.io/ceph/ceph:v18, name=elastic_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 01:16:17 np0005539508 podman[75146]: 2025-11-29 06:16:17.442098275 +0000 UTC m=+0.189478915 container start cb8e5addde5e5cd9bf02fc355bb83ea57edf4951d2717739ecb5c21f9d24497a (image=quay.io/ceph/ceph:v18, name=elastic_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:16:17 np0005539508 podman[75146]: 2025-11-29 06:16:17.446261882 +0000 UTC m=+0.193642522 container attach cb8e5addde5e5cd9bf02fc355bb83ea57edf4951d2717739ecb5c21f9d24497a (image=quay.io/ceph/ceph:v18, name=elastic_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 29 01:16:17 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'nfs'
Nov 29 01:16:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 01:16:17 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/524573884' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]: 
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]: {
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:    "fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:    "health": {
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "status": "HEALTH_OK",
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "checks": {},
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "mutes": []
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:    },
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:    "election_epoch": 5,
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:    "quorum": [
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        0
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:    ],
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:    "quorum_names": [
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "compute-0"
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:    ],
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:    "quorum_age": 10,
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:    "monmap": {
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "epoch": 1,
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "min_mon_release_name": "reef",
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "num_mons": 1
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:    },
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:    "osdmap": {
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "epoch": 1,
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "num_osds": 0,
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "num_up_osds": 0,
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "osd_up_since": 0,
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "num_in_osds": 0,
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "osd_in_since": 0,
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "num_remapped_pgs": 0
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:    },
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:    "pgmap": {
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "pgs_by_state": [],
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "num_pgs": 0,
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "num_pools": 0,
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "num_objects": 0,
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "data_bytes": 0,
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "bytes_used": 0,
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "bytes_avail": 0,
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "bytes_total": 0
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:    },
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:    "fsmap": {
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "epoch": 1,
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "by_rank": [],
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "up:standby": 0
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:    },
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:    "mgrmap": {
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "available": false,
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "num_standbys": 0,
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "modules": [
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:            "iostat",
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:            "nfs",
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:            "restful"
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        ],
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "services": {}
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:    },
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:    "servicemap": {
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "epoch": 1,
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "modified": "2025-11-29T06:16:03.952029+0000",
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:        "services": {}
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:    },
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]:    "progress_events": {}
Nov 29 01:16:17 np0005539508 elastic_brahmagupta[75162]: }
Nov 29 01:16:17 np0005539508 systemd[1]: libpod-cb8e5addde5e5cd9bf02fc355bb83ea57edf4951d2717739ecb5c21f9d24497a.scope: Deactivated successfully.
Nov 29 01:16:17 np0005539508 podman[75146]: 2025-11-29 06:16:17.85141829 +0000 UTC m=+0.598798950 container died cb8e5addde5e5cd9bf02fc355bb83ea57edf4951d2717739ecb5c21f9d24497a (image=quay.io/ceph/ceph:v18, name=elastic_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 29 01:16:17 np0005539508 systemd[1]: var-lib-containers-storage-overlay-584160f81864a3434bad69d42828dd79ebb6a402dc26fa6137d5a1261b1e6d9b-merged.mount: Deactivated successfully.
Nov 29 01:16:17 np0005539508 podman[75146]: 2025-11-29 06:16:17.915625604 +0000 UTC m=+0.663006244 container remove cb8e5addde5e5cd9bf02fc355bb83ea57edf4951d2717739ecb5c21f9d24497a (image=quay.io/ceph/ceph:v18, name=elastic_brahmagupta, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:16:17 np0005539508 systemd[1]: libpod-conmon-cb8e5addde5e5cd9bf02fc355bb83ea57edf4951d2717739ecb5c21f9d24497a.scope: Deactivated successfully.
Nov 29 01:16:18 np0005539508 ceph-mgr[74948]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 01:16:18 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'orchestrator'
Nov 29 01:16:18 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:18.355+0000 7fa614c10140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 01:16:19 np0005539508 ceph-mgr[74948]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 01:16:19 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'osd_perf_query'
Nov 29 01:16:19 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:19.039+0000 7fa614c10140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 01:16:19 np0005539508 ceph-mgr[74948]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 01:16:19 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'osd_support'
Nov 29 01:16:19 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:19.351+0000 7fa614c10140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 01:16:19 np0005539508 ceph-mgr[74948]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 01:16:19 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'pg_autoscaler'
Nov 29 01:16:19 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:19.574+0000 7fa614c10140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 01:16:19 np0005539508 ceph-mgr[74948]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 01:16:19 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'progress'
Nov 29 01:16:19 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:19.843+0000 7fa614c10140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 01:16:19 np0005539508 podman[75202]: 2025-11-29 06:16:19.99728857 +0000 UTC m=+0.051583889 container create 3594411ca6ee9500dfcfa8041cf09c3451d15386ff6d3ec27e7da6c9b9d7323d (image=quay.io/ceph/ceph:v18, name=infallible_mccarthy, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 01:16:20 np0005539508 systemd[1]: Started libpod-conmon-3594411ca6ee9500dfcfa8041cf09c3451d15386ff6d3ec27e7da6c9b9d7323d.scope.
Nov 29 01:16:20 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:20 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d9309783e64de5f1de97fcf2bfd9652f7324a91bfad36997a17516bdc371ab3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:20 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d9309783e64de5f1de97fcf2bfd9652f7324a91bfad36997a17516bdc371ab3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:20 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d9309783e64de5f1de97fcf2bfd9652f7324a91bfad36997a17516bdc371ab3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:20 np0005539508 podman[75202]: 2025-11-29 06:16:19.981398251 +0000 UTC m=+0.035693590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:20 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:20.097+0000 7fa614c10140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 01:16:20 np0005539508 ceph-mgr[74948]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 01:16:20 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'prometheus'
Nov 29 01:16:20 np0005539508 podman[75202]: 2025-11-29 06:16:20.101017 +0000 UTC m=+0.155312409 container init 3594411ca6ee9500dfcfa8041cf09c3451d15386ff6d3ec27e7da6c9b9d7323d (image=quay.io/ceph/ceph:v18, name=infallible_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:16:20 np0005539508 podman[75202]: 2025-11-29 06:16:20.110410346 +0000 UTC m=+0.164705675 container start 3594411ca6ee9500dfcfa8041cf09c3451d15386ff6d3ec27e7da6c9b9d7323d (image=quay.io/ceph/ceph:v18, name=infallible_mccarthy, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:16:20 np0005539508 podman[75202]: 2025-11-29 06:16:20.11408527 +0000 UTC m=+0.168380679 container attach 3594411ca6ee9500dfcfa8041cf09c3451d15386ff6d3ec27e7da6c9b9d7323d (image=quay.io/ceph/ceph:v18, name=infallible_mccarthy, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:16:20 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 01:16:20 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/362093507' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]: 
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]: {
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:    "fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:    "health": {
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "status": "HEALTH_OK",
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "checks": {},
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "mutes": []
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:    },
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:    "election_epoch": 5,
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:    "quorum": [
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        0
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:    ],
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:    "quorum_names": [
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "compute-0"
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:    ],
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:    "quorum_age": 13,
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:    "monmap": {
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "epoch": 1,
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "min_mon_release_name": "reef",
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "num_mons": 1
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:    },
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:    "osdmap": {
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "epoch": 1,
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "num_osds": 0,
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "num_up_osds": 0,
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "osd_up_since": 0,
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "num_in_osds": 0,
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "osd_in_since": 0,
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "num_remapped_pgs": 0
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:    },
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:    "pgmap": {
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "pgs_by_state": [],
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "num_pgs": 0,
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "num_pools": 0,
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "num_objects": 0,
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "data_bytes": 0,
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "bytes_used": 0,
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "bytes_avail": 0,
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "bytes_total": 0
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:    },
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:    "fsmap": {
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "epoch": 1,
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "by_rank": [],
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "up:standby": 0
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:    },
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:    "mgrmap": {
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "available": false,
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "num_standbys": 0,
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "modules": [
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:            "iostat",
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:            "nfs",
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:            "restful"
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        ],
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "services": {}
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:    },
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:    "servicemap": {
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "epoch": 1,
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "modified": "2025-11-29T06:16:03.952029+0000",
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:        "services": {}
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:    },
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]:    "progress_events": {}
Nov 29 01:16:20 np0005539508 infallible_mccarthy[75217]: }
Nov 29 01:16:20 np0005539508 systemd[1]: libpod-3594411ca6ee9500dfcfa8041cf09c3451d15386ff6d3ec27e7da6c9b9d7323d.scope: Deactivated successfully.
Nov 29 01:16:20 np0005539508 podman[75202]: 2025-11-29 06:16:20.512467465 +0000 UTC m=+0.566762824 container died 3594411ca6ee9500dfcfa8041cf09c3451d15386ff6d3ec27e7da6c9b9d7323d (image=quay.io/ceph/ceph:v18, name=infallible_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:16:20 np0005539508 systemd[1]: var-lib-containers-storage-overlay-1d9309783e64de5f1de97fcf2bfd9652f7324a91bfad36997a17516bdc371ab3-merged.mount: Deactivated successfully.
Nov 29 01:16:20 np0005539508 podman[75202]: 2025-11-29 06:16:20.569230728 +0000 UTC m=+0.623526047 container remove 3594411ca6ee9500dfcfa8041cf09c3451d15386ff6d3ec27e7da6c9b9d7323d (image=quay.io/ceph/ceph:v18, name=infallible_mccarthy, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:16:20 np0005539508 systemd[1]: libpod-conmon-3594411ca6ee9500dfcfa8041cf09c3451d15386ff6d3ec27e7da6c9b9d7323d.scope: Deactivated successfully.
Nov 29 01:16:21 np0005539508 ceph-mgr[74948]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 01:16:21 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'rbd_support'
Nov 29 01:16:21 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:21.016+0000 7fa614c10140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 01:16:21 np0005539508 ceph-mgr[74948]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 01:16:21 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:21.300+0000 7fa614c10140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 01:16:21 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'restful'
Nov 29 01:16:21 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'rgw'
Nov 29 01:16:22 np0005539508 podman[75257]: 2025-11-29 06:16:22.643809314 +0000 UTC m=+0.048518232 container create a84ec53b56b69986a2d33f5ed147c519f7690aee237cb7bd7a0a89a69cf42195 (image=quay.io/ceph/ceph:v18, name=tender_dhawan, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:16:22 np0005539508 systemd[1]: Started libpod-conmon-a84ec53b56b69986a2d33f5ed147c519f7690aee237cb7bd7a0a89a69cf42195.scope.
Nov 29 01:16:22 np0005539508 ceph-mgr[74948]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 01:16:22 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'rook'
Nov 29 01:16:22 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:22.691+0000 7fa614c10140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 01:16:22 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:22 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a9b695cfdc7cb097cdde7d7e63128baa33f67d62e83b4a335b963f427ba4e6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:22 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a9b695cfdc7cb097cdde7d7e63128baa33f67d62e83b4a335b963f427ba4e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:22 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a9b695cfdc7cb097cdde7d7e63128baa33f67d62e83b4a335b963f427ba4e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:22 np0005539508 podman[75257]: 2025-11-29 06:16:22.62277153 +0000 UTC m=+0.027480458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:22 np0005539508 podman[75257]: 2025-11-29 06:16:22.727671583 +0000 UTC m=+0.132380571 container init a84ec53b56b69986a2d33f5ed147c519f7690aee237cb7bd7a0a89a69cf42195 (image=quay.io/ceph/ceph:v18, name=tender_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 01:16:22 np0005539508 podman[75257]: 2025-11-29 06:16:22.740542217 +0000 UTC m=+0.145251125 container start a84ec53b56b69986a2d33f5ed147c519f7690aee237cb7bd7a0a89a69cf42195 (image=quay.io/ceph/ceph:v18, name=tender_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 01:16:22 np0005539508 podman[75257]: 2025-11-29 06:16:22.832864526 +0000 UTC m=+0.237573544 container attach a84ec53b56b69986a2d33f5ed147c519f7690aee237cb7bd7a0a89a69cf42195 (image=quay.io/ceph/ceph:v18, name=tender_dhawan, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 01:16:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 01:16:23 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1379746660' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]: 
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]: {
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:    "fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:    "health": {
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "status": "HEALTH_OK",
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "checks": {},
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "mutes": []
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:    },
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:    "election_epoch": 5,
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:    "quorum": [
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        0
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:    ],
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:    "quorum_names": [
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "compute-0"
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:    ],
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:    "quorum_age": 16,
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:    "monmap": {
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "epoch": 1,
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "min_mon_release_name": "reef",
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "num_mons": 1
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:    },
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:    "osdmap": {
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "epoch": 1,
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "num_osds": 0,
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "num_up_osds": 0,
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "osd_up_since": 0,
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "num_in_osds": 0,
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "osd_in_since": 0,
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "num_remapped_pgs": 0
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:    },
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:    "pgmap": {
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "pgs_by_state": [],
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "num_pgs": 0,
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "num_pools": 0,
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "num_objects": 0,
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "data_bytes": 0,
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "bytes_used": 0,
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "bytes_avail": 0,
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "bytes_total": 0
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:    },
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:    "fsmap": {
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "epoch": 1,
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "by_rank": [],
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "up:standby": 0
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:    },
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:    "mgrmap": {
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "available": false,
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "num_standbys": 0,
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "modules": [
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:            "iostat",
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:            "nfs",
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:            "restful"
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        ],
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "services": {}
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:    },
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:    "servicemap": {
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "epoch": 1,
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "modified": "2025-11-29T06:16:03.952029+0000",
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:        "services": {}
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:    },
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]:    "progress_events": {}
Nov 29 01:16:23 np0005539508 tender_dhawan[75273]: }
Nov 29 01:16:23 np0005539508 systemd[1]: libpod-a84ec53b56b69986a2d33f5ed147c519f7690aee237cb7bd7a0a89a69cf42195.scope: Deactivated successfully.
Nov 29 01:16:23 np0005539508 podman[75257]: 2025-11-29 06:16:23.152165237 +0000 UTC m=+0.556874145 container died a84ec53b56b69986a2d33f5ed147c519f7690aee237cb7bd7a0a89a69cf42195 (image=quay.io/ceph/ceph:v18, name=tender_dhawan, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:16:23 np0005539508 systemd[1]: var-lib-containers-storage-overlay-a8a9b695cfdc7cb097cdde7d7e63128baa33f67d62e83b4a335b963f427ba4e6-merged.mount: Deactivated successfully.
Nov 29 01:16:23 np0005539508 podman[75257]: 2025-11-29 06:16:23.194140303 +0000 UTC m=+0.598849211 container remove a84ec53b56b69986a2d33f5ed147c519f7690aee237cb7bd7a0a89a69cf42195 (image=quay.io/ceph/ceph:v18, name=tender_dhawan, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:16:23 np0005539508 systemd[1]: libpod-conmon-a84ec53b56b69986a2d33f5ed147c519f7690aee237cb7bd7a0a89a69cf42195.scope: Deactivated successfully.
Nov 29 01:16:24 np0005539508 ceph-mgr[74948]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 01:16:24 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'selftest'
Nov 29 01:16:24 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:24.640+0000 7fa614c10140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 01:16:24 np0005539508 ceph-mgr[74948]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 01:16:24 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'snap_schedule'
Nov 29 01:16:24 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:24.860+0000 7fa614c10140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 01:16:25 np0005539508 ceph-mgr[74948]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 01:16:25 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:25.086+0000 7fa614c10140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 01:16:25 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'stats'
Nov 29 01:16:25 np0005539508 podman[75313]: 2025-11-29 06:16:25.270800386 +0000 UTC m=+0.054140310 container create 3f080c16850f875f2f2efbc7c16a32d1e7cee7a323349d9e7ce88e44b331e925 (image=quay.io/ceph/ceph:v18, name=hopeful_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 01:16:25 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'status'
Nov 29 01:16:25 np0005539508 systemd[1]: Started libpod-conmon-3f080c16850f875f2f2efbc7c16a32d1e7cee7a323349d9e7ce88e44b331e925.scope.
Nov 29 01:16:25 np0005539508 podman[75313]: 2025-11-29 06:16:25.24157297 +0000 UTC m=+0.024912914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:25 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:25 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc76f32ebfd43d2a3429ad29a9eade799314b2efb0719f4ae62469cadf33f921/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:25 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc76f32ebfd43d2a3429ad29a9eade799314b2efb0719f4ae62469cadf33f921/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:25 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc76f32ebfd43d2a3429ad29a9eade799314b2efb0719f4ae62469cadf33f921/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:25 np0005539508 podman[75313]: 2025-11-29 06:16:25.372340715 +0000 UTC m=+0.155680659 container init 3f080c16850f875f2f2efbc7c16a32d1e7cee7a323349d9e7ce88e44b331e925 (image=quay.io/ceph/ceph:v18, name=hopeful_curran, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 01:16:25 np0005539508 podman[75313]: 2025-11-29 06:16:25.378618643 +0000 UTC m=+0.161958577 container start 3f080c16850f875f2f2efbc7c16a32d1e7cee7a323349d9e7ce88e44b331e925 (image=quay.io/ceph/ceph:v18, name=hopeful_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 01:16:25 np0005539508 podman[75313]: 2025-11-29 06:16:25.383373817 +0000 UTC m=+0.166713771 container attach 3f080c16850f875f2f2efbc7c16a32d1e7cee7a323349d9e7ce88e44b331e925 (image=quay.io/ceph/ceph:v18, name=hopeful_curran, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:16:25 np0005539508 ceph-mgr[74948]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 01:16:25 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'telegraf'
Nov 29 01:16:25 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:25.577+0000 7fa614c10140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 01:16:25 np0005539508 ceph-mgr[74948]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 01:16:25 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'telemetry'
Nov 29 01:16:25 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:25.843+0000 7fa614c10140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 01:16:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 01:16:25 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/378257284' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]: 
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]: {
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:    "fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:    "health": {
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "status": "HEALTH_OK",
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "checks": {},
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "mutes": []
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:    },
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:    "election_epoch": 5,
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:    "quorum": [
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        0
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:    ],
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:    "quorum_names": [
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "compute-0"
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:    ],
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:    "quorum_age": 18,
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:    "monmap": {
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "epoch": 1,
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "min_mon_release_name": "reef",
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "num_mons": 1
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:    },
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:    "osdmap": {
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "epoch": 1,
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "num_osds": 0,
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "num_up_osds": 0,
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "osd_up_since": 0,
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "num_in_osds": 0,
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "osd_in_since": 0,
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "num_remapped_pgs": 0
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:    },
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:    "pgmap": {
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "pgs_by_state": [],
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "num_pgs": 0,
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "num_pools": 0,
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "num_objects": 0,
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "data_bytes": 0,
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "bytes_used": 0,
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "bytes_avail": 0,
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "bytes_total": 0
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:    },
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:    "fsmap": {
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "epoch": 1,
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "by_rank": [],
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "up:standby": 0
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:    },
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:    "mgrmap": {
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "available": false,
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "num_standbys": 0,
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "modules": [
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:            "iostat",
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:            "nfs",
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:            "restful"
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        ],
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "services": {}
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:    },
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:    "servicemap": {
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "epoch": 1,
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "modified": "2025-11-29T06:16:03.952029+0000",
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:        "services": {}
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:    },
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]:    "progress_events": {}
Nov 29 01:16:25 np0005539508 hopeful_curran[75330]: }
Nov 29 01:16:25 np0005539508 systemd[1]: libpod-3f080c16850f875f2f2efbc7c16a32d1e7cee7a323349d9e7ce88e44b331e925.scope: Deactivated successfully.
Nov 29 01:16:25 np0005539508 podman[75313]: 2025-11-29 06:16:25.866311812 +0000 UTC m=+0.649651746 container died 3f080c16850f875f2f2efbc7c16a32d1e7cee7a323349d9e7ce88e44b331e925 (image=quay.io/ceph/ceph:v18, name=hopeful_curran, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:16:25 np0005539508 systemd[1]: var-lib-containers-storage-overlay-cc76f32ebfd43d2a3429ad29a9eade799314b2efb0719f4ae62469cadf33f921-merged.mount: Deactivated successfully.
Nov 29 01:16:25 np0005539508 podman[75313]: 2025-11-29 06:16:25.933521181 +0000 UTC m=+0.716861115 container remove 3f080c16850f875f2f2efbc7c16a32d1e7cee7a323349d9e7ce88e44b331e925 (image=quay.io/ceph/ceph:v18, name=hopeful_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 01:16:25 np0005539508 systemd[1]: libpod-conmon-3f080c16850f875f2f2efbc7c16a32d1e7cee7a323349d9e7ce88e44b331e925.scope: Deactivated successfully.
Nov 29 01:16:26 np0005539508 ceph-mgr[74948]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 01:16:26 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'test_orchestrator'
Nov 29 01:16:26 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:26.408+0000 7fa614c10140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 01:16:27 np0005539508 ceph-mgr[74948]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 01:16:27 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'volumes'
Nov 29 01:16:27 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:27.020+0000 7fa614c10140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 01:16:27 np0005539508 ceph-mgr[74948]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 01:16:27 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'zabbix'
Nov 29 01:16:27 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:27.689+0000 7fa614c10140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 01:16:27 np0005539508 ceph-mgr[74948]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 01:16:27 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:27.909+0000 7fa614c10140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 01:16:27 np0005539508 ceph-mgr[74948]: ms_deliver_dispatch: unhandled message 0x55dc33b48f20 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 29 01:16:27 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.vxabpq
Nov 29 01:16:28 np0005539508 podman[75369]: 2025-11-29 06:16:28.00826553 +0000 UTC m=+0.041511274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:29 np0005539508 podman[75369]: 2025-11-29 06:16:29.199079606 +0000 UTC m=+1.232325260 container create 7c56bc0b4df2186584d046c1e8839481ae8511610c867d6d0eb138777f9b05aa (image=quay.io/ceph/ceph:v18, name=suspicious_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.vxabpq(active, starting, since 1.29032s)
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: mgr handle_mgr_map Activating!
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: mgr handle_mgr_map I am now activating
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e1 all = 1
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.vxabpq", "id": "compute-0.vxabpq"} v 0) v1
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mgr metadata", "who": "compute-0.vxabpq", "id": "compute-0.vxabpq"}]: dispatch
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: balancer
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [balancer INFO root] Starting
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:16:29
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [balancer INFO root] No pools available
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : Manager daemon compute-0.vxabpq is now available
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: crash
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: devicehealth
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [devicehealth INFO root] Starting
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: iostat
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: nfs
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: orchestrator
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: pg_autoscaler
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: progress
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [progress INFO root] Loading...
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [progress INFO root] No stored events to load
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [progress INFO root] Loaded [] historic events
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [progress INFO root] Loaded OSDMap, ready.
Nov 29 01:16:29 np0005539508 systemd[1]: Started libpod-conmon-7c56bc0b4df2186584d046c1e8839481ae8511610c867d6d0eb138777f9b05aa.scope.
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] recovery thread starting
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] starting setup
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: rbd_support
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: Activating manager daemon compute-0.vxabpq
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: Manager daemon compute-0.vxabpq is now available
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: restful
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: status
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [restful INFO root] server_addr: :: server_port: 8003
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vxabpq/mirror_snapshot_schedule"} v 0) v1
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vxabpq/mirror_snapshot_schedule"}]: dispatch
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: telemetry
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [restful WARNING root] server not running: no certificate configured
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] PerfHandler: starting
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TaskHandler: starting
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vxabpq/trash_purge_schedule"} v 0) v1
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vxabpq/trash_purge_schedule"}]: dispatch
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' 
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] setup complete
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' 
Nov 29 01:16:29 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Nov 29 01:16:29 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27bb6159ff2abe455b10f69e2dac2dd1a4164f53be4474987593d0d9b95cbbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:29 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27bb6159ff2abe455b10f69e2dac2dd1a4164f53be4474987593d0d9b95cbbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:29 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: volumes
Nov 29 01:16:29 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27bb6159ff2abe455b10f69e2dac2dd1a4164f53be4474987593d0d9b95cbbe/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' 
Nov 29 01:16:29 np0005539508 podman[75369]: 2025-11-29 06:16:29.295731977 +0000 UTC m=+1.328977651 container init 7c56bc0b4df2186584d046c1e8839481ae8511610c867d6d0eb138777f9b05aa (image=quay.io/ceph/ceph:v18, name=suspicious_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:16:29 np0005539508 podman[75369]: 2025-11-29 06:16:29.304746831 +0000 UTC m=+1.337992485 container start 7c56bc0b4df2186584d046c1e8839481ae8511610c867d6d0eb138777f9b05aa (image=quay.io/ceph/ceph:v18, name=suspicious_allen, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:16:29 np0005539508 podman[75369]: 2025-11-29 06:16:29.308694733 +0000 UTC m=+1.341940437 container attach 7c56bc0b4df2186584d046c1e8839481ae8511610c867d6d0eb138777f9b05aa (image=quay.io/ceph/ceph:v18, name=suspicious_allen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 01:16:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2668231799' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]: 
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]: {
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:    "fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:    "health": {
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "status": "HEALTH_OK",
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "checks": {},
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "mutes": []
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:    },
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:    "election_epoch": 5,
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:    "quorum": [
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        0
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:    ],
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:    "quorum_names": [
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "compute-0"
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:    ],
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:    "quorum_age": 22,
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:    "monmap": {
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "epoch": 1,
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "min_mon_release_name": "reef",
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "num_mons": 1
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:    },
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:    "osdmap": {
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "epoch": 1,
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "num_osds": 0,
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "num_up_osds": 0,
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "osd_up_since": 0,
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "num_in_osds": 0,
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "osd_in_since": 0,
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "num_remapped_pgs": 0
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:    },
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:    "pgmap": {
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "pgs_by_state": [],
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "num_pgs": 0,
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "num_pools": 0,
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "num_objects": 0,
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "data_bytes": 0,
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "bytes_used": 0,
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "bytes_avail": 0,
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "bytes_total": 0
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:    },
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:    "fsmap": {
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "epoch": 1,
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "by_rank": [],
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "up:standby": 0
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:    },
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:    "mgrmap": {
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "available": false,
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "num_standbys": 0,
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "modules": [
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:            "iostat",
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:            "nfs",
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:            "restful"
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        ],
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "services": {}
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:    },
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:    "servicemap": {
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "epoch": 1,
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "modified": "2025-11-29T06:16:03.952029+0000",
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:        "services": {}
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:    },
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]:    "progress_events": {}
Nov 29 01:16:29 np0005539508 suspicious_allen[75419]: }
Nov 29 01:16:29 np0005539508 systemd[1]: libpod-7c56bc0b4df2186584d046c1e8839481ae8511610c867d6d0eb138777f9b05aa.scope: Deactivated successfully.
Nov 29 01:16:29 np0005539508 conmon[75419]: conmon 7c56bc0b4df2186584d0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c56bc0b4df2186584d046c1e8839481ae8511610c867d6d0eb138777f9b05aa.scope/container/memory.events
Nov 29 01:16:29 np0005539508 podman[75369]: 2025-11-29 06:16:29.736421908 +0000 UTC m=+1.769667602 container died 7c56bc0b4df2186584d046c1e8839481ae8511610c867d6d0eb138777f9b05aa (image=quay.io/ceph/ceph:v18, name=suspicious_allen, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 01:16:29 np0005539508 systemd[1]: var-lib-containers-storage-overlay-a27bb6159ff2abe455b10f69e2dac2dd1a4164f53be4474987593d0d9b95cbbe-merged.mount: Deactivated successfully.
Nov 29 01:16:29 np0005539508 podman[75369]: 2025-11-29 06:16:29.795634971 +0000 UTC m=+1.828880655 container remove 7c56bc0b4df2186584d046c1e8839481ae8511610c867d6d0eb138777f9b05aa (image=quay.io/ceph/ceph:v18, name=suspicious_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 01:16:29 np0005539508 systemd[1]: libpod-conmon-7c56bc0b4df2186584d046c1e8839481ae8511610c867d6d0eb138777f9b05aa.scope: Deactivated successfully.
Nov 29 01:16:30 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.vxabpq(active, since 2s)
Nov 29 01:16:30 np0005539508 ceph-mon[74654]: from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vxabpq/mirror_snapshot_schedule"}]: dispatch
Nov 29 01:16:30 np0005539508 ceph-mon[74654]: from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vxabpq/trash_purge_schedule"}]: dispatch
Nov 29 01:16:30 np0005539508 ceph-mon[74654]: from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' 
Nov 29 01:16:30 np0005539508 ceph-mon[74654]: from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' 
Nov 29 01:16:30 np0005539508 ceph-mon[74654]: from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' 
Nov 29 01:16:31 np0005539508 ceph-mgr[74948]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 01:16:31 np0005539508 podman[75502]: 2025-11-29 06:16:31.897412376 +0000 UTC m=+0.068094965 container create e610ea1f19fe5e072465d55cbb1b15b4004e795bef3b4f54d9bc294e47cba538 (image=quay.io/ceph/ceph:v18, name=hungry_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 01:16:31 np0005539508 systemd[1]: Started libpod-conmon-e610ea1f19fe5e072465d55cbb1b15b4004e795bef3b4f54d9bc294e47cba538.scope.
Nov 29 01:16:31 np0005539508 podman[75502]: 2025-11-29 06:16:31.8710207 +0000 UTC m=+0.041703299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:31 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:31 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9940341278fa9575df8df96f2ec704781775be8f84fbb24e0bf586ad1e3aea37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:31 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9940341278fa9575df8df96f2ec704781775be8f84fbb24e0bf586ad1e3aea37/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:31 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9940341278fa9575df8df96f2ec704781775be8f84fbb24e0bf586ad1e3aea37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:32 np0005539508 podman[75502]: 2025-11-29 06:16:32.008762382 +0000 UTC m=+0.179444981 container init e610ea1f19fe5e072465d55cbb1b15b4004e795bef3b4f54d9bc294e47cba538 (image=quay.io/ceph/ceph:v18, name=hungry_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:16:32 np0005539508 podman[75502]: 2025-11-29 06:16:32.01825231 +0000 UTC m=+0.188934899 container start e610ea1f19fe5e072465d55cbb1b15b4004e795bef3b4f54d9bc294e47cba538 (image=quay.io/ceph/ceph:v18, name=hungry_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:16:32 np0005539508 podman[75502]: 2025-11-29 06:16:32.022593172 +0000 UTC m=+0.193275771 container attach e610ea1f19fe5e072465d55cbb1b15b4004e795bef3b4f54d9bc294e47cba538 (image=quay.io/ceph/ceph:v18, name=hungry_gagarin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 01:16:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 01:16:32 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2531602317' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]: 
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]: {
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:    "fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:    "health": {
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "status": "HEALTH_OK",
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "checks": {},
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "mutes": []
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:    },
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:    "election_epoch": 5,
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:    "quorum": [
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        0
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:    ],
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:    "quorum_names": [
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "compute-0"
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:    ],
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:    "quorum_age": 25,
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:    "monmap": {
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "epoch": 1,
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "min_mon_release_name": "reef",
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "num_mons": 1
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:    },
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:    "osdmap": {
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "epoch": 1,
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "num_osds": 0,
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "num_up_osds": 0,
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "osd_up_since": 0,
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "num_in_osds": 0,
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "osd_in_since": 0,
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "num_remapped_pgs": 0
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:    },
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:    "pgmap": {
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "pgs_by_state": [],
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "num_pgs": 0,
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "num_pools": 0,
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "num_objects": 0,
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "data_bytes": 0,
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "bytes_used": 0,
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "bytes_avail": 0,
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "bytes_total": 0
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:    },
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:    "fsmap": {
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "epoch": 1,
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "by_rank": [],
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "up:standby": 0
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:    },
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:    "mgrmap": {
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "available": true,
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "num_standbys": 0,
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "modules": [
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:            "iostat",
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:            "nfs",
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:            "restful"
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        ],
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "services": {}
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:    },
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:    "servicemap": {
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "epoch": 1,
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "modified": "2025-11-29T06:16:03.952029+0000",
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:        "services": {}
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:    },
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]:    "progress_events": {}
Nov 29 01:16:32 np0005539508 hungry_gagarin[75518]: }
Nov 29 01:16:32 np0005539508 systemd[1]: libpod-e610ea1f19fe5e072465d55cbb1b15b4004e795bef3b4f54d9bc294e47cba538.scope: Deactivated successfully.
Nov 29 01:16:32 np0005539508 podman[75502]: 2025-11-29 06:16:32.77680237 +0000 UTC m=+0.947484979 container died e610ea1f19fe5e072465d55cbb1b15b4004e795bef3b4f54d9bc294e47cba538 (image=quay.io/ceph/ceph:v18, name=hungry_gagarin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 01:16:32 np0005539508 systemd[1]: var-lib-containers-storage-overlay-9940341278fa9575df8df96f2ec704781775be8f84fbb24e0bf586ad1e3aea37-merged.mount: Deactivated successfully.
Nov 29 01:16:32 np0005539508 podman[75502]: 2025-11-29 06:16:32.831014262 +0000 UTC m=+1.001696821 container remove e610ea1f19fe5e072465d55cbb1b15b4004e795bef3b4f54d9bc294e47cba538 (image=quay.io/ceph/ceph:v18, name=hungry_gagarin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:16:32 np0005539508 systemd[1]: libpod-conmon-e610ea1f19fe5e072465d55cbb1b15b4004e795bef3b4f54d9bc294e47cba538.scope: Deactivated successfully.
Nov 29 01:16:32 np0005539508 podman[75556]: 2025-11-29 06:16:32.907213465 +0000 UTC m=+0.053903594 container create e1713de655ad7ca7ee723479cba6602d309177b883a2ac63cb3ef5df93e83cf3 (image=quay.io/ceph/ceph:v18, name=angry_ramanujan, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:16:32 np0005539508 systemd[1]: Started libpod-conmon-e1713de655ad7ca7ee723479cba6602d309177b883a2ac63cb3ef5df93e83cf3.scope.
Nov 29 01:16:32 np0005539508 podman[75556]: 2025-11-29 06:16:32.881009804 +0000 UTC m=+0.027700013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:32 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:32 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00d0413133e46bcc5b864f36b6b44581db8e4e671339f93a17b6112df78a008c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:32 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00d0413133e46bcc5b864f36b6b44581db8e4e671339f93a17b6112df78a008c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:32 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00d0413133e46bcc5b864f36b6b44581db8e4e671339f93a17b6112df78a008c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:32 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00d0413133e46bcc5b864f36b6b44581db8e4e671339f93a17b6112df78a008c/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:33 np0005539508 podman[75556]: 2025-11-29 06:16:33.019770205 +0000 UTC m=+0.166460344 container init e1713de655ad7ca7ee723479cba6602d309177b883a2ac63cb3ef5df93e83cf3 (image=quay.io/ceph/ceph:v18, name=angry_ramanujan, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:16:33 np0005539508 podman[75556]: 2025-11-29 06:16:33.028579774 +0000 UTC m=+0.175269883 container start e1713de655ad7ca7ee723479cba6602d309177b883a2ac63cb3ef5df93e83cf3 (image=quay.io/ceph/ceph:v18, name=angry_ramanujan, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:16:33 np0005539508 podman[75556]: 2025-11-29 06:16:33.032489815 +0000 UTC m=+0.179180014 container attach e1713de655ad7ca7ee723479cba6602d309177b883a2ac63cb3ef5df93e83cf3 (image=quay.io/ceph/ceph:v18, name=angry_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:16:33 np0005539508 ceph-mgr[74948]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 01:16:33 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 29 01:16:33 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/60232043' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 01:16:33 np0005539508 systemd[1]: libpod-e1713de655ad7ca7ee723479cba6602d309177b883a2ac63cb3ef5df93e83cf3.scope: Deactivated successfully.
Nov 29 01:16:33 np0005539508 podman[75556]: 2025-11-29 06:16:33.582938377 +0000 UTC m=+0.729628576 container died e1713de655ad7ca7ee723479cba6602d309177b883a2ac63cb3ef5df93e83cf3 (image=quay.io/ceph/ceph:v18, name=angry_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 01:16:33 np0005539508 systemd[1]: var-lib-containers-storage-overlay-00d0413133e46bcc5b864f36b6b44581db8e4e671339f93a17b6112df78a008c-merged.mount: Deactivated successfully.
Nov 29 01:16:33 np0005539508 podman[75556]: 2025-11-29 06:16:33.635858552 +0000 UTC m=+0.782548661 container remove e1713de655ad7ca7ee723479cba6602d309177b883a2ac63cb3ef5df93e83cf3 (image=quay.io/ceph/ceph:v18, name=angry_ramanujan, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:16:33 np0005539508 systemd[1]: libpod-conmon-e1713de655ad7ca7ee723479cba6602d309177b883a2ac63cb3ef5df93e83cf3.scope: Deactivated successfully.
Nov 29 01:16:33 np0005539508 podman[75609]: 2025-11-29 06:16:33.740288333 +0000 UTC m=+0.059277306 container create a8bc4e2ebd7836cd1f1a3c01e044b9d8fa32d1d7561fe2c337124f2556bd8cd7 (image=quay.io/ceph/ceph:v18, name=vigilant_turing, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 01:16:33 np0005539508 systemd[1]: Started libpod-conmon-a8bc4e2ebd7836cd1f1a3c01e044b9d8fa32d1d7561fe2c337124f2556bd8cd7.scope.
Nov 29 01:16:33 np0005539508 podman[75609]: 2025-11-29 06:16:33.711698575 +0000 UTC m=+0.030687588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:33 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:33 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a38b228cb168e92ee01a172477e02817d378a4fa99ef7fd48255bd9c0462bb38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:33 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a38b228cb168e92ee01a172477e02817d378a4fa99ef7fd48255bd9c0462bb38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:33 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a38b228cb168e92ee01a172477e02817d378a4fa99ef7fd48255bd9c0462bb38/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:33 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/60232043' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 01:16:33 np0005539508 podman[75609]: 2025-11-29 06:16:33.845439143 +0000 UTC m=+0.164428176 container init a8bc4e2ebd7836cd1f1a3c01e044b9d8fa32d1d7561fe2c337124f2556bd8cd7 (image=quay.io/ceph/ceph:v18, name=vigilant_turing, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:16:33 np0005539508 podman[75609]: 2025-11-29 06:16:33.855741975 +0000 UTC m=+0.174730948 container start a8bc4e2ebd7836cd1f1a3c01e044b9d8fa32d1d7561fe2c337124f2556bd8cd7 (image=quay.io/ceph/ceph:v18, name=vigilant_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:16:33 np0005539508 podman[75609]: 2025-11-29 06:16:33.86052259 +0000 UTC m=+0.179511563 container attach a8bc4e2ebd7836cd1f1a3c01e044b9d8fa32d1d7561fe2c337124f2556bd8cd7 (image=quay.io/ceph/ceph:v18, name=vigilant_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 01:16:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Nov 29 01:16:34 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2380306659' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 29 01:16:34 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/2380306659' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 29 01:16:34 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2380306659' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 29 01:16:34 np0005539508 ceph-mgr[74948]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 29 01:16:34 np0005539508 ceph-mgr[74948]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 29 01:16:34 np0005539508 ceph-mgr[74948]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 29 01:16:34 np0005539508 ceph-mgr[74948]: mgr respawn  1: '-n'
Nov 29 01:16:34 np0005539508 ceph-mgr[74948]: mgr respawn  2: 'mgr.compute-0.vxabpq'
Nov 29 01:16:34 np0005539508 ceph-mgr[74948]: mgr respawn  3: '-f'
Nov 29 01:16:34 np0005539508 ceph-mgr[74948]: mgr respawn  4: '--setuser'
Nov 29 01:16:34 np0005539508 ceph-mgr[74948]: mgr respawn  5: 'ceph'
Nov 29 01:16:34 np0005539508 ceph-mgr[74948]: mgr respawn  6: '--setgroup'
Nov 29 01:16:34 np0005539508 ceph-mgr[74948]: mgr respawn  7: 'ceph'
Nov 29 01:16:34 np0005539508 ceph-mgr[74948]: mgr respawn  8: '--default-log-to-file=false'
Nov 29 01:16:34 np0005539508 ceph-mgr[74948]: mgr respawn  9: '--default-log-to-journald=true'
Nov 29 01:16:34 np0005539508 ceph-mgr[74948]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 29 01:16:34 np0005539508 ceph-mgr[74948]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 29 01:16:34 np0005539508 ceph-mgr[74948]: mgr respawn  exe_path /proc/self/exe
Nov 29 01:16:34 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.vxabpq(active, since 6s)
Nov 29 01:16:34 np0005539508 systemd[1]: libpod-a8bc4e2ebd7836cd1f1a3c01e044b9d8fa32d1d7561fe2c337124f2556bd8cd7.scope: Deactivated successfully.
Nov 29 01:16:34 np0005539508 podman[75609]: 2025-11-29 06:16:34.895403479 +0000 UTC m=+1.214392432 container died a8bc4e2ebd7836cd1f1a3c01e044b9d8fa32d1d7561fe2c337124f2556bd8cd7 (image=quay.io/ceph/ceph:v18, name=vigilant_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 29 01:16:34 np0005539508 systemd[1]: var-lib-containers-storage-overlay-a38b228cb168e92ee01a172477e02817d378a4fa99ef7fd48255bd9c0462bb38-merged.mount: Deactivated successfully.
Nov 29 01:16:34 np0005539508 podman[75609]: 2025-11-29 06:16:34.950358051 +0000 UTC m=+1.269346994 container remove a8bc4e2ebd7836cd1f1a3c01e044b9d8fa32d1d7561fe2c337124f2556bd8cd7 (image=quay.io/ceph/ceph:v18, name=vigilant_turing, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:16:34 np0005539508 systemd[1]: libpod-conmon-a8bc4e2ebd7836cd1f1a3c01e044b9d8fa32d1d7561fe2c337124f2556bd8cd7.scope: Deactivated successfully.
Nov 29 01:16:34 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: ignoring --setuser ceph since I am not root
Nov 29 01:16:34 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: ignoring --setgroup ceph since I am not root
Nov 29 01:16:34 np0005539508 ceph-mgr[74948]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 29 01:16:34 np0005539508 ceph-mgr[74948]: pidfile_write: ignore empty --pid-file
Nov 29 01:16:35 np0005539508 podman[75666]: 2025-11-29 06:16:35.007719652 +0000 UTC m=+0.036848842 container create 35d86aaa2001a400f0bdafddea47d26ba9b2e7b09541f0e33defe94f2c4a3eba (image=quay.io/ceph/ceph:v18, name=nostalgic_wilbur, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 01:16:35 np0005539508 systemd[1]: Started libpod-conmon-35d86aaa2001a400f0bdafddea47d26ba9b2e7b09541f0e33defe94f2c4a3eba.scope.
Nov 29 01:16:35 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:35 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c5a3a0704adaa95f045a0aec7b480cec4098c03ff111adae85ea86db8380e3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:35 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c5a3a0704adaa95f045a0aec7b480cec4098c03ff111adae85ea86db8380e3b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:35 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c5a3a0704adaa95f045a0aec7b480cec4098c03ff111adae85ea86db8380e3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:35 np0005539508 podman[75666]: 2025-11-29 06:16:35.073427738 +0000 UTC m=+0.102556938 container init 35d86aaa2001a400f0bdafddea47d26ba9b2e7b09541f0e33defe94f2c4a3eba (image=quay.io/ceph/ceph:v18, name=nostalgic_wilbur, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 01:16:35 np0005539508 podman[75666]: 2025-11-29 06:16:35.08373957 +0000 UTC m=+0.112868760 container start 35d86aaa2001a400f0bdafddea47d26ba9b2e7b09541f0e33defe94f2c4a3eba (image=quay.io/ceph/ceph:v18, name=nostalgic_wilbur, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:16:35 np0005539508 podman[75666]: 2025-11-29 06:16:34.992730038 +0000 UTC m=+0.021859248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:35 np0005539508 podman[75666]: 2025-11-29 06:16:35.088155004 +0000 UTC m=+0.117284194 container attach 35d86aaa2001a400f0bdafddea47d26ba9b2e7b09541f0e33defe94f2c4a3eba (image=quay.io/ceph/ceph:v18, name=nostalgic_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 01:16:35 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'alerts'
Nov 29 01:16:35 np0005539508 ceph-mgr[74948]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 01:16:35 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'balancer'
Nov 29 01:16:35 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:35.408+0000 7f91542c8140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 01:16:35 np0005539508 ceph-mgr[74948]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 01:16:35 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'cephadm'
Nov 29 01:16:35 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:35.669+0000 7f91542c8140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 01:16:35 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 01:16:35 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3397641018' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 01:16:35 np0005539508 nostalgic_wilbur[75706]: {
Nov 29 01:16:35 np0005539508 nostalgic_wilbur[75706]:    "epoch": 4,
Nov 29 01:16:35 np0005539508 nostalgic_wilbur[75706]:    "available": true,
Nov 29 01:16:35 np0005539508 nostalgic_wilbur[75706]:    "active_name": "compute-0.vxabpq",
Nov 29 01:16:35 np0005539508 nostalgic_wilbur[75706]:    "num_standby": 0
Nov 29 01:16:35 np0005539508 nostalgic_wilbur[75706]: }
Nov 29 01:16:35 np0005539508 systemd[1]: libpod-35d86aaa2001a400f0bdafddea47d26ba9b2e7b09541f0e33defe94f2c4a3eba.scope: Deactivated successfully.
Nov 29 01:16:35 np0005539508 conmon[75706]: conmon 35d86aaa2001a400f0bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-35d86aaa2001a400f0bdafddea47d26ba9b2e7b09541f0e33defe94f2c4a3eba.scope/container/memory.events
Nov 29 01:16:35 np0005539508 podman[75666]: 2025-11-29 06:16:35.71657287 +0000 UTC m=+0.745702100 container died 35d86aaa2001a400f0bdafddea47d26ba9b2e7b09541f0e33defe94f2c4a3eba (image=quay.io/ceph/ceph:v18, name=nostalgic_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 01:16:35 np0005539508 systemd[1]: var-lib-containers-storage-overlay-5c5a3a0704adaa95f045a0aec7b480cec4098c03ff111adae85ea86db8380e3b-merged.mount: Deactivated successfully.
Nov 29 01:16:35 np0005539508 podman[75666]: 2025-11-29 06:16:35.770449832 +0000 UTC m=+0.799579022 container remove 35d86aaa2001a400f0bdafddea47d26ba9b2e7b09541f0e33defe94f2c4a3eba (image=quay.io/ceph/ceph:v18, name=nostalgic_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:16:35 np0005539508 systemd[1]: libpod-conmon-35d86aaa2001a400f0bdafddea47d26ba9b2e7b09541f0e33defe94f2c4a3eba.scope: Deactivated successfully.
Nov 29 01:16:35 np0005539508 podman[75744]: 2025-11-29 06:16:35.848822706 +0000 UTC m=+0.048491651 container create b3b4a9df478f449d160df1983659cfd9365d411f9112a902bf37bde390b1fa73 (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 01:16:35 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/2380306659' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 29 01:16:35 np0005539508 systemd[1]: Started libpod-conmon-b3b4a9df478f449d160df1983659cfd9365d411f9112a902bf37bde390b1fa73.scope.
Nov 29 01:16:35 np0005539508 podman[75744]: 2025-11-29 06:16:35.827341349 +0000 UTC m=+0.027010274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:35 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:35 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e52f745a80b1de5e4082a73bb90c0f33fce9bf44fe0d8a3b3f125f872d688093/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:35 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e52f745a80b1de5e4082a73bb90c0f33fce9bf44fe0d8a3b3f125f872d688093/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:35 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e52f745a80b1de5e4082a73bb90c0f33fce9bf44fe0d8a3b3f125f872d688093/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:36 np0005539508 podman[75744]: 2025-11-29 06:16:36.006679696 +0000 UTC m=+0.206348601 container init b3b4a9df478f449d160df1983659cfd9365d411f9112a902bf37bde390b1fa73 (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:16:36 np0005539508 podman[75744]: 2025-11-29 06:16:36.016799322 +0000 UTC m=+0.216468267 container start b3b4a9df478f449d160df1983659cfd9365d411f9112a902bf37bde390b1fa73 (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:16:36 np0005539508 podman[75744]: 2025-11-29 06:16:36.020922209 +0000 UTC m=+0.220591154 container attach b3b4a9df478f449d160df1983659cfd9365d411f9112a902bf37bde390b1fa73 (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 01:16:37 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'crash'
Nov 29 01:16:37 np0005539508 ceph-mgr[74948]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 01:16:37 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'dashboard'
Nov 29 01:16:37 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:37.853+0000 7f91542c8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 01:16:39 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'devicehealth'
Nov 29 01:16:39 np0005539508 ceph-mgr[74948]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 01:16:39 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:39.522+0000 7f91542c8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 01:16:39 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'diskprediction_local'
Nov 29 01:16:40 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 29 01:16:40 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 29 01:16:40 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]:  from numpy import show_config as show_numpy_config
Nov 29 01:16:40 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:40.031+0000 7f91542c8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 01:16:40 np0005539508 ceph-mgr[74948]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 01:16:40 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'influx'
Nov 29 01:16:40 np0005539508 ceph-mgr[74948]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 01:16:40 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:40.268+0000 7f91542c8140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 01:16:40 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'insights'
Nov 29 01:16:40 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'iostat'
Nov 29 01:16:40 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:40.742+0000 7f91542c8140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 01:16:40 np0005539508 ceph-mgr[74948]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 01:16:40 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'k8sevents'
Nov 29 01:16:42 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'localpool'
Nov 29 01:16:42 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'mds_autoscaler'
Nov 29 01:16:43 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'mirroring'
Nov 29 01:16:43 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'nfs'
Nov 29 01:16:44 np0005539508 ceph-mgr[74948]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 01:16:44 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:44.320+0000 7f91542c8140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 01:16:44 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'orchestrator'
Nov 29 01:16:44 np0005539508 ceph-mgr[74948]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 01:16:44 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:44.990+0000 7f91542c8140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 01:16:44 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'osd_perf_query'
Nov 29 01:16:45 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:45.236+0000 7f91542c8140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 01:16:45 np0005539508 ceph-mgr[74948]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 01:16:45 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'osd_support'
Nov 29 01:16:45 np0005539508 ceph-mgr[74948]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 01:16:45 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:45.448+0000 7f91542c8140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 01:16:45 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'pg_autoscaler'
Nov 29 01:16:45 np0005539508 ceph-mgr[74948]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 01:16:45 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:45.707+0000 7f91542c8140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 01:16:45 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'progress'
Nov 29 01:16:45 np0005539508 ceph-mgr[74948]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 01:16:45 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:45.926+0000 7f91542c8140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 01:16:45 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'prometheus'
Nov 29 01:16:46 np0005539508 ceph-mgr[74948]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 01:16:46 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:46.861+0000 7f91542c8140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 01:16:46 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'rbd_support'
Nov 29 01:16:47 np0005539508 ceph-mgr[74948]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 01:16:47 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:47.167+0000 7f91542c8140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 01:16:47 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'restful'
Nov 29 01:16:47 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'rgw'
Nov 29 01:16:48 np0005539508 ceph-mgr[74948]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 01:16:48 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'rook'
Nov 29 01:16:48 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:48.611+0000 7f91542c8140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 01:16:50 np0005539508 ceph-mgr[74948]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 01:16:50 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:50.683+0000 7f91542c8140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 01:16:50 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'selftest'
Nov 29 01:16:50 np0005539508 ceph-mgr[74948]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 01:16:50 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:50.915+0000 7f91542c8140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 01:16:50 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'snap_schedule'
Nov 29 01:16:51 np0005539508 ceph-mgr[74948]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 01:16:51 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:51.154+0000 7f91542c8140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 01:16:51 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'stats'
Nov 29 01:16:51 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'status'
Nov 29 01:16:51 np0005539508 ceph-mgr[74948]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 01:16:51 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:51.656+0000 7f91542c8140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 01:16:51 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'telegraf'
Nov 29 01:16:51 np0005539508 ceph-mgr[74948]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 01:16:51 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:51.894+0000 7f91542c8140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 01:16:51 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'telemetry'
Nov 29 01:16:52 np0005539508 ceph-mgr[74948]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 01:16:52 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:52.508+0000 7f91542c8140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 01:16:52 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'test_orchestrator'
Nov 29 01:16:53 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:53.181+0000 7f91542c8140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 01:16:53 np0005539508 ceph-mgr[74948]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 01:16:53 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'volumes'
Nov 29 01:16:53 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:53.907+0000 7f91542c8140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 01:16:53 np0005539508 ceph-mgr[74948]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 01:16:53 np0005539508 ceph-mgr[74948]: mgr[py] Loading python module 'zabbix'
Nov 29 01:16:54 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:54.137+0000 7f91542c8140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : Active manager daemon compute-0.vxabpq restarted
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.vxabpq
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: ms_deliver_dispatch: unhandled message 0x5648794d0420 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.vxabpq(active, starting, since 0.0144217s)
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: mgr handle_mgr_map Activating!
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: mgr handle_mgr_map I am now activating
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.vxabpq", "id": "compute-0.vxabpq"} v 0) v1
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mgr metadata", "who": "compute-0.vxabpq", "id": "compute-0.vxabpq"}]: dispatch
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e1 all = 1
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: balancer
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Starting
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : Manager daemon compute-0.vxabpq is now available
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:16:54
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] No pools available
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: Active manager daemon compute-0.vxabpq restarted
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: Activating manager daemon compute-0.vxabpq
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: Manager daemon compute-0.vxabpq is now available
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: cephadm
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: crash
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: devicehealth
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: iostat
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [devicehealth INFO root] Starting
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: nfs
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: orchestrator
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: pg_autoscaler
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: progress
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [progress INFO root] Loading...
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [progress INFO root] No stored events to load
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [progress INFO root] Loaded [] historic events
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [progress INFO root] Loaded OSDMap, ready.
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] recovery thread starting
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] starting setup
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: rbd_support
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: restful
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vxabpq/mirror_snapshot_schedule"} v 0) v1
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vxabpq/mirror_snapshot_schedule"}]: dispatch
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [restful INFO root] server_addr: :: server_port: 8003
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: status
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] PerfHandler: starting
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TaskHandler: starting
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: telemetry
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [restful WARNING root] server not running: no certificate configured
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vxabpq/trash_purge_schedule"} v 0) v1
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vxabpq/trash_purge_schedule"}]: dispatch
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] setup complete
Nov 29 01:16:54 np0005539508 ceph-mgr[74948]: mgr load Constructed class from module: volumes
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Nov 29 01:16:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:16:55 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.vxabpq(active, since 1.02504s)
Nov 29 01:16:55 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 29 01:16:55 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 29 01:16:55 np0005539508 infallible_mahavira[75760]: {
Nov 29 01:16:55 np0005539508 infallible_mahavira[75760]:    "mgrmap_epoch": 6,
Nov 29 01:16:55 np0005539508 infallible_mahavira[75760]:    "initialized": true
Nov 29 01:16:55 np0005539508 infallible_mahavira[75760]: }
Nov 29 01:16:55 np0005539508 systemd[1]: libpod-b3b4a9df478f449d160df1983659cfd9365d411f9112a902bf37bde390b1fa73.scope: Deactivated successfully.
Nov 29 01:16:55 np0005539508 conmon[75760]: conmon b3b4a9df478f449d160d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b3b4a9df478f449d160df1983659cfd9365d411f9112a902bf37bde390b1fa73.scope/container/memory.events
Nov 29 01:16:55 np0005539508 podman[75744]: 2025-11-29 06:16:55.195836845 +0000 UTC m=+19.395505780 container died b3b4a9df478f449d160df1983659cfd9365d411f9112a902bf37bde390b1fa73 (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 01:16:55 np0005539508 ceph-mon[74654]: Found migration_current of "None". Setting to last migration.
Nov 29 01:16:55 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:16:55 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:16:55 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vxabpq/mirror_snapshot_schedule"}]: dispatch
Nov 29 01:16:55 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vxabpq/trash_purge_schedule"}]: dispatch
Nov 29 01:16:55 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:16:55 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:16:55 np0005539508 systemd[1]: var-lib-containers-storage-overlay-e52f745a80b1de5e4082a73bb90c0f33fce9bf44fe0d8a3b3f125f872d688093-merged.mount: Deactivated successfully.
Nov 29 01:16:55 np0005539508 podman[75744]: 2025-11-29 06:16:55.252803895 +0000 UTC m=+19.452472820 container remove b3b4a9df478f449d160df1983659cfd9365d411f9112a902bf37bde390b1fa73 (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 01:16:55 np0005539508 systemd[1]: libpod-conmon-b3b4a9df478f449d160df1983659cfd9365d411f9112a902bf37bde390b1fa73.scope: Deactivated successfully.
Nov 29 01:16:55 np0005539508 podman[75922]: 2025-11-29 06:16:55.339447903 +0000 UTC m=+0.055181490 container create 85b48ca7af23a7068f1df5a6ae1889d7e37d4455364c394ba4fc5b1b49f83023 (image=quay.io/ceph/ceph:v18, name=priceless_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:16:55 np0005539508 systemd[1]: Started libpod-conmon-85b48ca7af23a7068f1df5a6ae1889d7e37d4455364c394ba4fc5b1b49f83023.scope.
Nov 29 01:16:55 np0005539508 ceph-mgr[74948]: [cephadm INFO cherrypy.error] [29/Nov/2025:06:16:55] ENGINE Bus STARTING
Nov 29 01:16:55 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : [29/Nov/2025:06:16:55] ENGINE Bus STARTING
Nov 29 01:16:55 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:55 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/901e1c1d63fca62ba9112816060a8643375a3a6d8c4fedb26b79f347ca36ba73/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:55 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/901e1c1d63fca62ba9112816060a8643375a3a6d8c4fedb26b79f347ca36ba73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:55 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/901e1c1d63fca62ba9112816060a8643375a3a6d8c4fedb26b79f347ca36ba73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:55 np0005539508 podman[75922]: 2025-11-29 06:16:55.320566329 +0000 UTC m=+0.036299956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:55 np0005539508 podman[75922]: 2025-11-29 06:16:55.422394546 +0000 UTC m=+0.138128233 container init 85b48ca7af23a7068f1df5a6ae1889d7e37d4455364c394ba4fc5b1b49f83023 (image=quay.io/ceph/ceph:v18, name=priceless_margulis, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:16:55 np0005539508 podman[75922]: 2025-11-29 06:16:55.432170913 +0000 UTC m=+0.147904500 container start 85b48ca7af23a7068f1df5a6ae1889d7e37d4455364c394ba4fc5b1b49f83023 (image=quay.io/ceph/ceph:v18, name=priceless_margulis, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:16:55 np0005539508 podman[75922]: 2025-11-29 06:16:55.435929919 +0000 UTC m=+0.151663536 container attach 85b48ca7af23a7068f1df5a6ae1889d7e37d4455364c394ba4fc5b1b49f83023 (image=quay.io/ceph/ceph:v18, name=priceless_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 01:16:55 np0005539508 ceph-mgr[74948]: [cephadm INFO cherrypy.error] [29/Nov/2025:06:16:55] ENGINE Serving on http://192.168.122.100:8765
Nov 29 01:16:55 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : [29/Nov/2025:06:16:55] ENGINE Serving on http://192.168.122.100:8765
Nov 29 01:16:55 np0005539508 ceph-mgr[74948]: [cephadm INFO cherrypy.error] [29/Nov/2025:06:16:55] ENGINE Serving on https://192.168.122.100:7150
Nov 29 01:16:55 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : [29/Nov/2025:06:16:55] ENGINE Serving on https://192.168.122.100:7150
Nov 29 01:16:55 np0005539508 ceph-mgr[74948]: [cephadm INFO cherrypy.error] [29/Nov/2025:06:16:55] ENGINE Bus STARTED
Nov 29 01:16:55 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : [29/Nov/2025:06:16:55] ENGINE Bus STARTED
Nov 29 01:16:55 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 01:16:55 np0005539508 ceph-mgr[74948]: [cephadm INFO cherrypy.error] [29/Nov/2025:06:16:55] ENGINE Client ('192.168.122.100', 59988) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 29 01:16:55 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 01:16:55 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : [29/Nov/2025:06:16:55] ENGINE Client ('192.168.122.100', 59988) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 29 01:16:55 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:16:55 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Nov 29 01:16:55 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:16:55 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 01:16:55 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 01:16:55 np0005539508 systemd[1]: libpod-85b48ca7af23a7068f1df5a6ae1889d7e37d4455364c394ba4fc5b1b49f83023.scope: Deactivated successfully.
Nov 29 01:16:55 np0005539508 podman[75922]: 2025-11-29 06:16:55.997808353 +0000 UTC m=+0.713541980 container died 85b48ca7af23a7068f1df5a6ae1889d7e37d4455364c394ba4fc5b1b49f83023 (image=quay.io/ceph/ceph:v18, name=priceless_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:16:56 np0005539508 systemd[1]: var-lib-containers-storage-overlay-901e1c1d63fca62ba9112816060a8643375a3a6d8c4fedb26b79f347ca36ba73-merged.mount: Deactivated successfully.
Nov 29 01:16:56 np0005539508 podman[75922]: 2025-11-29 06:16:56.044820332 +0000 UTC m=+0.760553919 container remove 85b48ca7af23a7068f1df5a6ae1889d7e37d4455364c394ba4fc5b1b49f83023 (image=quay.io/ceph/ceph:v18, name=priceless_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:16:56 np0005539508 systemd[1]: libpod-conmon-85b48ca7af23a7068f1df5a6ae1889d7e37d4455364c394ba4fc5b1b49f83023.scope: Deactivated successfully.
Nov 29 01:16:56 np0005539508 podman[76001]: 2025-11-29 06:16:56.097606883 +0000 UTC m=+0.033135147 container create df05b91d7dbc67a5aa87d2095e0128adc0d17a018e09db22d44282f11b05427c (image=quay.io/ceph/ceph:v18, name=goofy_shaw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 01:16:56 np0005539508 systemd[1]: Started libpod-conmon-df05b91d7dbc67a5aa87d2095e0128adc0d17a018e09db22d44282f11b05427c.scope.
Nov 29 01:16:56 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:56 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6eea2c5bdbd42279e57f44f781da5b026d9a1001fb6c82445af0dab5531054a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:56 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6eea2c5bdbd42279e57f44f781da5b026d9a1001fb6c82445af0dab5531054a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:56 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6eea2c5bdbd42279e57f44f781da5b026d9a1001fb6c82445af0dab5531054a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:56 np0005539508 ceph-mgr[74948]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 01:16:56 np0005539508 podman[76001]: 2025-11-29 06:16:56.171495511 +0000 UTC m=+0.107023805 container init df05b91d7dbc67a5aa87d2095e0128adc0d17a018e09db22d44282f11b05427c (image=quay.io/ceph/ceph:v18, name=goofy_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 01:16:56 np0005539508 podman[76001]: 2025-11-29 06:16:56.08368822 +0000 UTC m=+0.019216504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:56 np0005539508 podman[76001]: 2025-11-29 06:16:56.183562412 +0000 UTC m=+0.119090716 container start df05b91d7dbc67a5aa87d2095e0128adc0d17a018e09db22d44282f11b05427c (image=quay.io/ceph/ceph:v18, name=goofy_shaw, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:16:56 np0005539508 podman[76001]: 2025-11-29 06:16:56.187693718 +0000 UTC m=+0.123222002 container attach df05b91d7dbc67a5aa87d2095e0128adc0d17a018e09db22d44282f11b05427c (image=quay.io/ceph/ceph:v18, name=goofy_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:16:56 np0005539508 ceph-mon[74654]: [29/Nov/2025:06:16:55] ENGINE Bus STARTING
Nov 29 01:16:56 np0005539508 ceph-mon[74654]: [29/Nov/2025:06:16:55] ENGINE Serving on http://192.168.122.100:8765
Nov 29 01:16:56 np0005539508 ceph-mon[74654]: [29/Nov/2025:06:16:55] ENGINE Serving on https://192.168.122.100:7150
Nov 29 01:16:56 np0005539508 ceph-mon[74654]: [29/Nov/2025:06:16:55] ENGINE Bus STARTED
Nov 29 01:16:56 np0005539508 ceph-mon[74654]: [29/Nov/2025:06:16:55] ENGINE Client ('192.168.122.100', 59988) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 29 01:16:56 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:16:56 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:16:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Nov 29 01:16:56 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:16:56 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Set ssh ssh_user
Nov 29 01:16:56 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 29 01:16:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Nov 29 01:16:56 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:16:56 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Set ssh ssh_config
Nov 29 01:16:56 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 29 01:16:56 np0005539508 ceph-mgr[74948]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 29 01:16:56 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 29 01:16:56 np0005539508 goofy_shaw[76018]: ssh user set to ceph-admin. sudo will be used
Nov 29 01:16:56 np0005539508 systemd[1]: libpod-df05b91d7dbc67a5aa87d2095e0128adc0d17a018e09db22d44282f11b05427c.scope: Deactivated successfully.
Nov 29 01:16:56 np0005539508 podman[76001]: 2025-11-29 06:16:56.739014025 +0000 UTC m=+0.674542299 container died df05b91d7dbc67a5aa87d2095e0128adc0d17a018e09db22d44282f11b05427c (image=quay.io/ceph/ceph:v18, name=goofy_shaw, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 01:16:56 np0005539508 systemd[1]: var-lib-containers-storage-overlay-c6eea2c5bdbd42279e57f44f781da5b026d9a1001fb6c82445af0dab5531054a-merged.mount: Deactivated successfully.
Nov 29 01:16:56 np0005539508 podman[76001]: 2025-11-29 06:16:56.783529703 +0000 UTC m=+0.719057967 container remove df05b91d7dbc67a5aa87d2095e0128adc0d17a018e09db22d44282f11b05427c (image=quay.io/ceph/ceph:v18, name=goofy_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 01:16:56 np0005539508 systemd[1]: libpod-conmon-df05b91d7dbc67a5aa87d2095e0128adc0d17a018e09db22d44282f11b05427c.scope: Deactivated successfully.
Nov 29 01:16:56 np0005539508 podman[76054]: 2025-11-29 06:16:56.862992418 +0000 UTC m=+0.053590735 container create 2763117994d3bd24da35795c31b99aae7c4274a7ab807e22f4653032a3888490 (image=quay.io/ceph/ceph:v18, name=hardcore_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 01:16:56 np0005539508 systemd[1]: Started libpod-conmon-2763117994d3bd24da35795c31b99aae7c4274a7ab807e22f4653032a3888490.scope.
Nov 29 01:16:56 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:56 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734c5f4bd1d5f8cbed00994e410379acd4a5abc52711f26847ed3277ad3fd7c4/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:56 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734c5f4bd1d5f8cbed00994e410379acd4a5abc52711f26847ed3277ad3fd7c4/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:56 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734c5f4bd1d5f8cbed00994e410379acd4a5abc52711f26847ed3277ad3fd7c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:56 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734c5f4bd1d5f8cbed00994e410379acd4a5abc52711f26847ed3277ad3fd7c4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:56 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734c5f4bd1d5f8cbed00994e410379acd4a5abc52711f26847ed3277ad3fd7c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:56 np0005539508 podman[76054]: 2025-11-29 06:16:56.84394273 +0000 UTC m=+0.034541097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:56 np0005539508 podman[76054]: 2025-11-29 06:16:56.942711591 +0000 UTC m=+0.133309978 container init 2763117994d3bd24da35795c31b99aae7c4274a7ab807e22f4653032a3888490 (image=quay.io/ceph/ceph:v18, name=hardcore_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:16:56 np0005539508 podman[76054]: 2025-11-29 06:16:56.953364292 +0000 UTC m=+0.143962609 container start 2763117994d3bd24da35795c31b99aae7c4274a7ab807e22f4653032a3888490 (image=quay.io/ceph/ceph:v18, name=hardcore_aryabhata, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 01:16:56 np0005539508 podman[76054]: 2025-11-29 06:16:56.957052886 +0000 UTC m=+0.147651273 container attach 2763117994d3bd24da35795c31b99aae7c4274a7ab807e22f4653032a3888490 (image=quay.io/ceph/ceph:v18, name=hardcore_aryabhata, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 01:16:56 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.vxabpq(active, since 2s)
Nov 29 01:16:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019919563 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:16:57 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:16:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Nov 29 01:16:57 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:16:57 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 29 01:16:57 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 29 01:16:57 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Set ssh private key
Nov 29 01:16:57 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 29 01:16:57 np0005539508 systemd[1]: libpod-2763117994d3bd24da35795c31b99aae7c4274a7ab807e22f4653032a3888490.scope: Deactivated successfully.
Nov 29 01:16:57 np0005539508 podman[76054]: 2025-11-29 06:16:57.492630498 +0000 UTC m=+0.683228845 container died 2763117994d3bd24da35795c31b99aae7c4274a7ab807e22f4653032a3888490 (image=quay.io/ceph/ceph:v18, name=hardcore_aryabhata, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:16:57 np0005539508 systemd[1]: var-lib-containers-storage-overlay-734c5f4bd1d5f8cbed00994e410379acd4a5abc52711f26847ed3277ad3fd7c4-merged.mount: Deactivated successfully.
Nov 29 01:16:57 np0005539508 podman[76054]: 2025-11-29 06:16:57.617086955 +0000 UTC m=+0.807685282 container remove 2763117994d3bd24da35795c31b99aae7c4274a7ab807e22f4653032a3888490 (image=quay.io/ceph/ceph:v18, name=hardcore_aryabhata, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 01:16:57 np0005539508 systemd[1]: libpod-conmon-2763117994d3bd24da35795c31b99aae7c4274a7ab807e22f4653032a3888490.scope: Deactivated successfully.
Nov 29 01:16:57 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:16:57 np0005539508 ceph-mon[74654]: Set ssh ssh_user
Nov 29 01:16:57 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:16:57 np0005539508 ceph-mon[74654]: Set ssh ssh_config
Nov 29 01:16:57 np0005539508 ceph-mon[74654]: ssh user set to ceph-admin. sudo will be used
Nov 29 01:16:57 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:16:57 np0005539508 podman[76111]: 2025-11-29 06:16:57.818122255 +0000 UTC m=+0.150823323 container create 9cb7ff071a0ac7275c01c024a4ee924e90b009397e5880c0d670d27b8cd96ff8 (image=quay.io/ceph/ceph:v18, name=happy_buck, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:16:57 np0005539508 systemd[1]: Started libpod-conmon-9cb7ff071a0ac7275c01c024a4ee924e90b009397e5880c0d670d27b8cd96ff8.scope.
Nov 29 01:16:57 np0005539508 podman[76111]: 2025-11-29 06:16:57.786692747 +0000 UTC m=+0.119393905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:57 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:57 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbec127e9c8d2c9be1b995fe8bf8fc31ad906ce7f581d9a421eed389819f1290/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:57 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbec127e9c8d2c9be1b995fe8bf8fc31ad906ce7f581d9a421eed389819f1290/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:57 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbec127e9c8d2c9be1b995fe8bf8fc31ad906ce7f581d9a421eed389819f1290/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:57 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbec127e9c8d2c9be1b995fe8bf8fc31ad906ce7f581d9a421eed389819f1290/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:57 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbec127e9c8d2c9be1b995fe8bf8fc31ad906ce7f581d9a421eed389819f1290/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:57 np0005539508 podman[76111]: 2025-11-29 06:16:57.921982649 +0000 UTC m=+0.254683737 container init 9cb7ff071a0ac7275c01c024a4ee924e90b009397e5880c0d670d27b8cd96ff8 (image=quay.io/ceph/ceph:v18, name=happy_buck, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:16:57 np0005539508 podman[76111]: 2025-11-29 06:16:57.932378363 +0000 UTC m=+0.265079461 container start 9cb7ff071a0ac7275c01c024a4ee924e90b009397e5880c0d670d27b8cd96ff8 (image=quay.io/ceph/ceph:v18, name=happy_buck, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:16:57 np0005539508 podman[76111]: 2025-11-29 06:16:57.937679273 +0000 UTC m=+0.270380371 container attach 9cb7ff071a0ac7275c01c024a4ee924e90b009397e5880c0d670d27b8cd96ff8 (image=quay.io/ceph/ceph:v18, name=happy_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 01:16:58 np0005539508 ceph-mgr[74948]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 01:16:58 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:16:58 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Nov 29 01:16:58 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:16:58 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 29 01:16:58 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 29 01:16:58 np0005539508 systemd[1]: libpod-9cb7ff071a0ac7275c01c024a4ee924e90b009397e5880c0d670d27b8cd96ff8.scope: Deactivated successfully.
Nov 29 01:16:58 np0005539508 podman[76111]: 2025-11-29 06:16:58.546278258 +0000 UTC m=+0.878979406 container died 9cb7ff071a0ac7275c01c024a4ee924e90b009397e5880c0d670d27b8cd96ff8 (image=quay.io/ceph/ceph:v18, name=happy_buck, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 01:16:58 np0005539508 systemd[1]: var-lib-containers-storage-overlay-fbec127e9c8d2c9be1b995fe8bf8fc31ad906ce7f581d9a421eed389819f1290-merged.mount: Deactivated successfully.
Nov 29 01:16:58 np0005539508 podman[76111]: 2025-11-29 06:16:58.605932944 +0000 UTC m=+0.938634042 container remove 9cb7ff071a0ac7275c01c024a4ee924e90b009397e5880c0d670d27b8cd96ff8 (image=quay.io/ceph/ceph:v18, name=happy_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:16:58 np0005539508 systemd[1]: libpod-conmon-9cb7ff071a0ac7275c01c024a4ee924e90b009397e5880c0d670d27b8cd96ff8.scope: Deactivated successfully.
Nov 29 01:16:58 np0005539508 podman[76165]: 2025-11-29 06:16:58.67943091 +0000 UTC m=+0.051224568 container create e1ef19578360b6b69ef1ba9d563050aadd0834d2807ef1f3834028e59465c26d (image=quay.io/ceph/ceph:v18, name=suspicious_satoshi, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:16:58 np0005539508 systemd[1]: Started libpod-conmon-e1ef19578360b6b69ef1ba9d563050aadd0834d2807ef1f3834028e59465c26d.scope.
Nov 29 01:16:58 np0005539508 podman[76165]: 2025-11-29 06:16:58.65111011 +0000 UTC m=+0.022903808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:58 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:58 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583d1a8dc69600fec0ed27aa9a9f7ca9cd05abbf3599a8622561d3b4fdcdcb63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:58 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583d1a8dc69600fec0ed27aa9a9f7ca9cd05abbf3599a8622561d3b4fdcdcb63/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:58 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583d1a8dc69600fec0ed27aa9a9f7ca9cd05abbf3599a8622561d3b4fdcdcb63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:58 np0005539508 ceph-mon[74654]: Set ssh ssh_identity_key
Nov 29 01:16:58 np0005539508 ceph-mon[74654]: Set ssh private key
Nov 29 01:16:58 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:16:58 np0005539508 podman[76165]: 2025-11-29 06:16:58.768344442 +0000 UTC m=+0.140138090 container init e1ef19578360b6b69ef1ba9d563050aadd0834d2807ef1f3834028e59465c26d (image=quay.io/ceph/ceph:v18, name=suspicious_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 01:16:58 np0005539508 podman[76165]: 2025-11-29 06:16:58.775741651 +0000 UTC m=+0.147535309 container start e1ef19578360b6b69ef1ba9d563050aadd0834d2807ef1f3834028e59465c26d (image=quay.io/ceph/ceph:v18, name=suspicious_satoshi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 01:16:58 np0005539508 podman[76165]: 2025-11-29 06:16:58.779119397 +0000 UTC m=+0.150913025 container attach e1ef19578360b6b69ef1ba9d563050aadd0834d2807ef1f3834028e59465c26d (image=quay.io/ceph/ceph:v18, name=suspicious_satoshi, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:16:59 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:16:59 np0005539508 suspicious_satoshi[76181]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCu3bC5CnVOgMysWq+5APHTlM4aYzGGPRuaV3Iz9UGDOn9EfOSq89ba1ZYrKrCZUB2ran/6pGsjYG31TA1fEj7PJxj/KMHUXZPzA2OnYhngrow0DJlXLpAZyXEwCWnSXvNoXJgb+Ud550Hwu3I6cIXLfNiV0PeJy/vqOcH6IW0WeciHm6OCzzqtJz1SMRN/s41/Nlg8V/IqDT9xPkxz1bW1KAPpe1jOvvKpmdePRsd8IecvcTFX0ywbbVem+dv1+PDlXrXvNoyjA2zfibRBbkB6Gw2SWYp2G9Qsbf7kC0gEGWMwu2/vZAmvK/6aqb/D0r9z7hBfCzNCJFRrXW5bgxPGJN8q6pAKG3Bl/lDCya3x1lb50Tzraucim153k+46ML+IQYfoWFY17Xaa/tIYvaveLDDhXojDehUqhh8JYX/vkDMT/QnViiDNmskGirYuZG8steVIDcpvNGVStwn1Hb4XyPDP5/mSaD1oHMM5wZNHnZJG8WxJmyooKqNxOZDjnB0= zuul@controller
Nov 29 01:16:59 np0005539508 systemd[1]: libpod-e1ef19578360b6b69ef1ba9d563050aadd0834d2807ef1f3834028e59465c26d.scope: Deactivated successfully.
Nov 29 01:16:59 np0005539508 podman[76165]: 2025-11-29 06:16:59.306988671 +0000 UTC m=+0.678782299 container died e1ef19578360b6b69ef1ba9d563050aadd0834d2807ef1f3834028e59465c26d (image=quay.io/ceph/ceph:v18, name=suspicious_satoshi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 01:16:59 np0005539508 systemd[1]: var-lib-containers-storage-overlay-583d1a8dc69600fec0ed27aa9a9f7ca9cd05abbf3599a8622561d3b4fdcdcb63-merged.mount: Deactivated successfully.
Nov 29 01:16:59 np0005539508 podman[76165]: 2025-11-29 06:16:59.359494525 +0000 UTC m=+0.731288183 container remove e1ef19578360b6b69ef1ba9d563050aadd0834d2807ef1f3834028e59465c26d (image=quay.io/ceph/ceph:v18, name=suspicious_satoshi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 01:16:59 np0005539508 systemd[1]: libpod-conmon-e1ef19578360b6b69ef1ba9d563050aadd0834d2807ef1f3834028e59465c26d.scope: Deactivated successfully.
Nov 29 01:16:59 np0005539508 podman[76218]: 2025-11-29 06:16:59.434640208 +0000 UTC m=+0.048568193 container create 8ebdd1ded4b48c391514ed886d17f784e8ef8f23fcc7e7e018653525d59d8cd2 (image=quay.io/ceph/ceph:v18, name=heuristic_ramanujan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:16:59 np0005539508 systemd[1]: Started libpod-conmon-8ebdd1ded4b48c391514ed886d17f784e8ef8f23fcc7e7e018653525d59d8cd2.scope.
Nov 29 01:16:59 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:16:59 np0005539508 podman[76218]: 2025-11-29 06:16:59.412308447 +0000 UTC m=+0.026236482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:16:59 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2698566c29a9ad27f5f272b802476923100f2d5e9a6656d25c229e1bf051a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:59 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2698566c29a9ad27f5f272b802476923100f2d5e9a6656d25c229e1bf051a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:59 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2698566c29a9ad27f5f272b802476923100f2d5e9a6656d25c229e1bf051a5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:59 np0005539508 podman[76218]: 2025-11-29 06:16:59.534299644 +0000 UTC m=+0.148227609 container init 8ebdd1ded4b48c391514ed886d17f784e8ef8f23fcc7e7e018653525d59d8cd2 (image=quay.io/ceph/ceph:v18, name=heuristic_ramanujan, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:16:59 np0005539508 podman[76218]: 2025-11-29 06:16:59.543467602 +0000 UTC m=+0.157395547 container start 8ebdd1ded4b48c391514ed886d17f784e8ef8f23fcc7e7e018653525d59d8cd2 (image=quay.io/ceph/ceph:v18, name=heuristic_ramanujan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 01:16:59 np0005539508 podman[76218]: 2025-11-29 06:16:59.549329448 +0000 UTC m=+0.163257593 container attach 8ebdd1ded4b48c391514ed886d17f784e8ef8f23fcc7e7e018653525d59d8cd2 (image=quay.io/ceph/ceph:v18, name=heuristic_ramanujan, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 01:16:59 np0005539508 ceph-mon[74654]: Set ssh ssh_identity_pub
Nov 29 01:17:00 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:17:00 np0005539508 ceph-mgr[74948]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 01:17:00 np0005539508 systemd-logind[797]: New session 21 of user ceph-admin.
Nov 29 01:17:00 np0005539508 systemd[1]: Created slice User Slice of UID 42477.
Nov 29 01:17:00 np0005539508 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 29 01:17:00 np0005539508 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 29 01:17:00 np0005539508 systemd[1]: Starting User Manager for UID 42477...
Nov 29 01:17:00 np0005539508 systemd-logind[797]: New session 23 of user ceph-admin.
Nov 29 01:17:00 np0005539508 systemd[76267]: Queued start job for default target Main User Target.
Nov 29 01:17:00 np0005539508 systemd[76267]: Created slice User Application Slice.
Nov 29 01:17:00 np0005539508 systemd[76267]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 01:17:00 np0005539508 systemd[76267]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 01:17:00 np0005539508 systemd[76267]: Reached target Paths.
Nov 29 01:17:00 np0005539508 systemd[76267]: Reached target Timers.
Nov 29 01:17:00 np0005539508 systemd[76267]: Starting D-Bus User Message Bus Socket...
Nov 29 01:17:00 np0005539508 systemd[76267]: Starting Create User's Volatile Files and Directories...
Nov 29 01:17:00 np0005539508 systemd[76267]: Finished Create User's Volatile Files and Directories.
Nov 29 01:17:00 np0005539508 systemd[76267]: Listening on D-Bus User Message Bus Socket.
Nov 29 01:17:00 np0005539508 systemd[76267]: Reached target Sockets.
Nov 29 01:17:00 np0005539508 systemd[76267]: Reached target Basic System.
Nov 29 01:17:00 np0005539508 systemd[76267]: Reached target Main User Target.
Nov 29 01:17:00 np0005539508 systemd[76267]: Startup finished in 152ms.
Nov 29 01:17:00 np0005539508 systemd[1]: Started User Manager for UID 42477.
Nov 29 01:17:00 np0005539508 systemd[1]: Started Session 21 of User ceph-admin.
Nov 29 01:17:00 np0005539508 systemd[1]: Started Session 23 of User ceph-admin.
Nov 29 01:17:01 np0005539508 systemd-logind[797]: New session 24 of user ceph-admin.
Nov 29 01:17:01 np0005539508 systemd[1]: Started Session 24 of User ceph-admin.
Nov 29 01:17:01 np0005539508 systemd-logind[797]: New session 25 of user ceph-admin.
Nov 29 01:17:01 np0005539508 systemd[1]: Started Session 25 of User ceph-admin.
Nov 29 01:17:01 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 29 01:17:01 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 29 01:17:01 np0005539508 systemd-logind[797]: New session 26 of user ceph-admin.
Nov 29 01:17:01 np0005539508 systemd[1]: Started Session 26 of User ceph-admin.
Nov 29 01:17:02 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052984 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:17:02 np0005539508 ceph-mgr[74948]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 01:17:02 np0005539508 systemd-logind[797]: New session 27 of user ceph-admin.
Nov 29 01:17:02 np0005539508 systemd[1]: Started Session 27 of User ceph-admin.
Nov 29 01:17:02 np0005539508 systemd-logind[797]: New session 28 of user ceph-admin.
Nov 29 01:17:02 np0005539508 systemd[1]: Started Session 28 of User ceph-admin.
Nov 29 01:17:02 np0005539508 ceph-mon[74654]: Deploying cephadm binary to compute-0
Nov 29 01:17:03 np0005539508 systemd-logind[797]: New session 29 of user ceph-admin.
Nov 29 01:17:03 np0005539508 systemd[1]: Started Session 29 of User ceph-admin.
Nov 29 01:17:03 np0005539508 systemd-logind[797]: New session 30 of user ceph-admin.
Nov 29 01:17:03 np0005539508 systemd[1]: Started Session 30 of User ceph-admin.
Nov 29 01:17:03 np0005539508 systemd-logind[797]: New session 31 of user ceph-admin.
Nov 29 01:17:03 np0005539508 systemd[1]: Started Session 31 of User ceph-admin.
Nov 29 01:17:04 np0005539508 ceph-mgr[74948]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 01:17:04 np0005539508 systemd-logind[797]: New session 32 of user ceph-admin.
Nov 29 01:17:04 np0005539508 systemd[1]: Started Session 32 of User ceph-admin.
Nov 29 01:17:04 np0005539508 systemd-logind[797]: New session 33 of user ceph-admin.
Nov 29 01:17:04 np0005539508 systemd[1]: Started Session 33 of User ceph-admin.
Nov 29 01:17:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 01:17:05 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:05 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Added host compute-0
Nov 29 01:17:05 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 29 01:17:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 01:17:05 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 01:17:05 np0005539508 heuristic_ramanujan[76234]: Added host 'compute-0' with addr '192.168.122.100'
Nov 29 01:17:05 np0005539508 systemd[1]: libpod-8ebdd1ded4b48c391514ed886d17f784e8ef8f23fcc7e7e018653525d59d8cd2.scope: Deactivated successfully.
Nov 29 01:17:05 np0005539508 podman[76218]: 2025-11-29 06:17:05.520789305 +0000 UTC m=+6.134717260 container died 8ebdd1ded4b48c391514ed886d17f784e8ef8f23fcc7e7e018653525d59d8cd2 (image=quay.io/ceph/ceph:v18, name=heuristic_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:17:05 np0005539508 systemd[1]: var-lib-containers-storage-overlay-ab2698566c29a9ad27f5f272b802476923100f2d5e9a6656d25c229e1bf051a5-merged.mount: Deactivated successfully.
Nov 29 01:17:05 np0005539508 podman[76218]: 2025-11-29 06:17:05.720422946 +0000 UTC m=+6.334350921 container remove 8ebdd1ded4b48c391514ed886d17f784e8ef8f23fcc7e7e018653525d59d8cd2 (image=quay.io/ceph/ceph:v18, name=heuristic_ramanujan, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 01:17:05 np0005539508 systemd[1]: libpod-conmon-8ebdd1ded4b48c391514ed886d17f784e8ef8f23fcc7e7e018653525d59d8cd2.scope: Deactivated successfully.
Nov 29 01:17:05 np0005539508 podman[76983]: 2025-11-29 06:17:05.772911719 +0000 UTC m=+0.034825225 container create 553ec7b71fef2e274c801eef1a0ab25c12ada7226317694b9c80f8441335dbb4 (image=quay.io/ceph/ceph:v18, name=relaxed_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 01:17:05 np0005539508 systemd[1]: Started libpod-conmon-553ec7b71fef2e274c801eef1a0ab25c12ada7226317694b9c80f8441335dbb4.scope.
Nov 29 01:17:05 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:17:05 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279bc1901eb31510ccc5d1add33805af8f4d776f9555c0e00d84d21c272d7565/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:05 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279bc1901eb31510ccc5d1add33805af8f4d776f9555c0e00d84d21c272d7565/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:05 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279bc1901eb31510ccc5d1add33805af8f4d776f9555c0e00d84d21c272d7565/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:05 np0005539508 podman[76983]: 2025-11-29 06:17:05.842274149 +0000 UTC m=+0.104187685 container init 553ec7b71fef2e274c801eef1a0ab25c12ada7226317694b9c80f8441335dbb4 (image=quay.io/ceph/ceph:v18, name=relaxed_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 01:17:05 np0005539508 podman[76983]: 2025-11-29 06:17:05.847895747 +0000 UTC m=+0.109809263 container start 553ec7b71fef2e274c801eef1a0ab25c12ada7226317694b9c80f8441335dbb4 (image=quay.io/ceph/ceph:v18, name=relaxed_bose, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:17:05 np0005539508 podman[76983]: 2025-11-29 06:17:05.851912601 +0000 UTC m=+0.113826147 container attach 553ec7b71fef2e274c801eef1a0ab25c12ada7226317694b9c80f8441335dbb4 (image=quay.io/ceph/ceph:v18, name=relaxed_bose, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 01:17:05 np0005539508 podman[76983]: 2025-11-29 06:17:05.758685187 +0000 UTC m=+0.020598733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:17:05 np0005539508 podman[77032]: 2025-11-29 06:17:05.981667867 +0000 UTC m=+0.040303650 container create ddfda85c379129595370e272977267244290c91d9e3c4f22c4eb06fa11f84dd8 (image=quay.io/ceph/ceph:v18, name=funny_panini, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:17:06 np0005539508 systemd[1]: Started libpod-conmon-ddfda85c379129595370e272977267244290c91d9e3c4f22c4eb06fa11f84dd8.scope.
Nov 29 01:17:06 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:17:06 np0005539508 podman[77032]: 2025-11-29 06:17:06.043905425 +0000 UTC m=+0.102541228 container init ddfda85c379129595370e272977267244290c91d9e3c4f22c4eb06fa11f84dd8 (image=quay.io/ceph/ceph:v18, name=funny_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 01:17:06 np0005539508 podman[77032]: 2025-11-29 06:17:06.048557487 +0000 UTC m=+0.107193270 container start ddfda85c379129595370e272977267244290c91d9e3c4f22c4eb06fa11f84dd8 (image=quay.io/ceph/ceph:v18, name=funny_panini, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 01:17:06 np0005539508 podman[77032]: 2025-11-29 06:17:06.055959596 +0000 UTC m=+0.114595409 container attach ddfda85c379129595370e272977267244290c91d9e3c4f22c4eb06fa11f84dd8 (image=quay.io/ceph/ceph:v18, name=funny_panini, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:17:06 np0005539508 podman[77032]: 2025-11-29 06:17:05.964702508 +0000 UTC m=+0.023338311 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:17:06 np0005539508 ceph-mgr[74948]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 01:17:06 np0005539508 funny_panini[77049]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 29 01:17:06 np0005539508 systemd[1]: libpod-ddfda85c379129595370e272977267244290c91d9e3c4f22c4eb06fa11f84dd8.scope: Deactivated successfully.
Nov 29 01:17:06 np0005539508 podman[77032]: 2025-11-29 06:17:06.320579893 +0000 UTC m=+0.379215676 container died ddfda85c379129595370e272977267244290c91d9e3c4f22c4eb06fa11f84dd8 (image=quay.io/ceph/ceph:v18, name=funny_panini, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:17:06 np0005539508 systemd[1]: var-lib-containers-storage-overlay-4f4b80ac5c0a549d5657d25485b578c474ef3894f522dfa187adfb6d294cf5e3-merged.mount: Deactivated successfully.
Nov 29 01:17:06 np0005539508 podman[77032]: 2025-11-29 06:17:06.364341379 +0000 UTC m=+0.422977162 container remove ddfda85c379129595370e272977267244290c91d9e3c4f22c4eb06fa11f84dd8 (image=quay.io/ceph/ceph:v18, name=funny_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:17:06 np0005539508 systemd[1]: libpod-conmon-ddfda85c379129595370e272977267244290c91d9e3c4f22c4eb06fa11f84dd8.scope: Deactivated successfully.
Nov 29 01:17:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Nov 29 01:17:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:06 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:17:06 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 29 01:17:06 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 29 01:17:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 01:17:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:06 np0005539508 relaxed_bose[77001]: Scheduled mon update...
Nov 29 01:17:06 np0005539508 systemd[1]: libpod-553ec7b71fef2e274c801eef1a0ab25c12ada7226317694b9c80f8441335dbb4.scope: Deactivated successfully.
Nov 29 01:17:06 np0005539508 podman[76983]: 2025-11-29 06:17:06.475737977 +0000 UTC m=+0.737651523 container died 553ec7b71fef2e274c801eef1a0ab25c12ada7226317694b9c80f8441335dbb4 (image=quay.io/ceph/ceph:v18, name=relaxed_bose, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:17:06 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:06 np0005539508 ceph-mon[74654]: Added host compute-0
Nov 29 01:17:06 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:06 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:17:07 np0005539508 systemd[1]: var-lib-containers-storage-overlay-279bc1901eb31510ccc5d1add33805af8f4d776f9555c0e00d84d21c272d7565-merged.mount: Deactivated successfully.
Nov 29 01:17:07 np0005539508 podman[76983]: 2025-11-29 06:17:07.401268766 +0000 UTC m=+1.663182332 container remove 553ec7b71fef2e274c801eef1a0ab25c12ada7226317694b9c80f8441335dbb4 (image=quay.io/ceph/ceph:v18, name=relaxed_bose, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 01:17:07 np0005539508 systemd[1]: libpod-conmon-553ec7b71fef2e274c801eef1a0ab25c12ada7226317694b9c80f8441335dbb4.scope: Deactivated successfully.
Nov 29 01:17:07 np0005539508 podman[77208]: 2025-11-29 06:17:07.477302084 +0000 UTC m=+0.050895718 container create af222e2306a1917f524e2f6d77fa9a65bfc61537f8044f221df98c8f93ab3c49 (image=quay.io/ceph/ceph:v18, name=mystifying_tu, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:17:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:17:07 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:07 np0005539508 systemd[1]: Started libpod-conmon-af222e2306a1917f524e2f6d77fa9a65bfc61537f8044f221df98c8f93ab3c49.scope.
Nov 29 01:17:07 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:17:07 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/392b9e9b7c18b7c199db81beaca994e1a08743926ff62cceb228bba20cb6a679/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:07 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/392b9e9b7c18b7c199db81beaca994e1a08743926ff62cceb228bba20cb6a679/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:07 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/392b9e9b7c18b7c199db81beaca994e1a08743926ff62cceb228bba20cb6a679/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:07 np0005539508 podman[77208]: 2025-11-29 06:17:07.455701434 +0000 UTC m=+0.029295078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:17:07 np0005539508 podman[77208]: 2025-11-29 06:17:07.557775438 +0000 UTC m=+0.131369092 container init af222e2306a1917f524e2f6d77fa9a65bfc61537f8044f221df98c8f93ab3c49 (image=quay.io/ceph/ceph:v18, name=mystifying_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:17:07 np0005539508 podman[77208]: 2025-11-29 06:17:07.564123157 +0000 UTC m=+0.137716801 container start af222e2306a1917f524e2f6d77fa9a65bfc61537f8044f221df98c8f93ab3c49 (image=quay.io/ceph/ceph:v18, name=mystifying_tu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 01:17:07 np0005539508 podman[77208]: 2025-11-29 06:17:07.567275886 +0000 UTC m=+0.140869510 container attach af222e2306a1917f524e2f6d77fa9a65bfc61537f8044f221df98c8f93ab3c49 (image=quay.io/ceph/ceph:v18, name=mystifying_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 29 01:17:07 np0005539508 ceph-mon[74654]: Saving service mon spec with placement count:5
Nov 29 01:17:07 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:08 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:17:08 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 29 01:17:08 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 29 01:17:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 01:17:08 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:08 np0005539508 mystifying_tu[77238]: Scheduled mgr update...
Nov 29 01:17:08 np0005539508 systemd[1]: libpod-af222e2306a1917f524e2f6d77fa9a65bfc61537f8044f221df98c8f93ab3c49.scope: Deactivated successfully.
Nov 29 01:17:08 np0005539508 podman[77208]: 2025-11-29 06:17:08.110788493 +0000 UTC m=+0.684382127 container died af222e2306a1917f524e2f6d77fa9a65bfc61537f8044f221df98c8f93ab3c49 (image=quay.io/ceph/ceph:v18, name=mystifying_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:17:08 np0005539508 systemd[1]: var-lib-containers-storage-overlay-392b9e9b7c18b7c199db81beaca994e1a08743926ff62cceb228bba20cb6a679-merged.mount: Deactivated successfully.
Nov 29 01:17:08 np0005539508 ceph-mgr[74948]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 01:17:08 np0005539508 podman[77426]: 2025-11-29 06:17:08.175634255 +0000 UTC m=+0.072346135 container exec c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:17:08 np0005539508 podman[77208]: 2025-11-29 06:17:08.186389019 +0000 UTC m=+0.759982643 container remove af222e2306a1917f524e2f6d77fa9a65bfc61537f8044f221df98c8f93ab3c49 (image=quay.io/ceph/ceph:v18, name=mystifying_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:17:08 np0005539508 systemd[1]: libpod-conmon-af222e2306a1917f524e2f6d77fa9a65bfc61537f8044f221df98c8f93ab3c49.scope: Deactivated successfully.
Nov 29 01:17:08 np0005539508 podman[77457]: 2025-11-29 06:17:08.264346931 +0000 UTC m=+0.054009397 container create d09b89a526a74a3b64e3a7e3eb8cc0a60e2a4482feff6465867912d76dbd31f0 (image=quay.io/ceph/ceph:v18, name=heuristic_shirley, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 01:17:08 np0005539508 systemd[1]: Started libpod-conmon-d09b89a526a74a3b64e3a7e3eb8cc0a60e2a4482feff6465867912d76dbd31f0.scope.
Nov 29 01:17:08 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:17:08 np0005539508 podman[77457]: 2025-11-29 06:17:08.239276903 +0000 UTC m=+0.028939469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:17:08 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323d29befc8c895075fe31d014387a0f8fca64fd32da3edc8ad45df019546b7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:08 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323d29befc8c895075fe31d014387a0f8fca64fd32da3edc8ad45df019546b7e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:08 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323d29befc8c895075fe31d014387a0f8fca64fd32da3edc8ad45df019546b7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:08 np0005539508 podman[77457]: 2025-11-29 06:17:08.935172755 +0000 UTC m=+0.724835241 container init d09b89a526a74a3b64e3a7e3eb8cc0a60e2a4482feff6465867912d76dbd31f0 (image=quay.io/ceph/ceph:v18, name=heuristic_shirley, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 01:17:08 np0005539508 podman[77457]: 2025-11-29 06:17:08.943235123 +0000 UTC m=+0.732897599 container start d09b89a526a74a3b64e3a7e3eb8cc0a60e2a4482feff6465867912d76dbd31f0 (image=quay.io/ceph/ceph:v18, name=heuristic_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 01:17:09 np0005539508 podman[77457]: 2025-11-29 06:17:09.039791541 +0000 UTC m=+0.829454057 container attach d09b89a526a74a3b64e3a7e3eb8cc0a60e2a4482feff6465867912d76dbd31f0 (image=quay.io/ceph/ceph:v18, name=heuristic_shirley, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:17:09 np0005539508 ceph-mon[74654]: Saving service mgr spec with placement count:2
Nov 29 01:17:09 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:09 np0005539508 podman[77426]: 2025-11-29 06:17:09.098127399 +0000 UTC m=+0.994839279 container exec_died c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 01:17:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:17:09 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:09 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:17:09 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Saving service crash spec with placement *
Nov 29 01:17:09 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 29 01:17:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 01:17:09 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:09 np0005539508 heuristic_shirley[77473]: Scheduled crash update...
Nov 29 01:17:09 np0005539508 systemd[1]: libpod-d09b89a526a74a3b64e3a7e3eb8cc0a60e2a4482feff6465867912d76dbd31f0.scope: Deactivated successfully.
Nov 29 01:17:09 np0005539508 podman[77457]: 2025-11-29 06:17:09.566929165 +0000 UTC m=+1.356591671 container died d09b89a526a74a3b64e3a7e3eb8cc0a60e2a4482feff6465867912d76dbd31f0 (image=quay.io/ceph/ceph:v18, name=heuristic_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:17:09 np0005539508 systemd[1]: var-lib-containers-storage-overlay-323d29befc8c895075fe31d014387a0f8fca64fd32da3edc8ad45df019546b7e-merged.mount: Deactivated successfully.
Nov 29 01:17:09 np0005539508 podman[77457]: 2025-11-29 06:17:09.618556163 +0000 UTC m=+1.408218629 container remove d09b89a526a74a3b64e3a7e3eb8cc0a60e2a4482feff6465867912d76dbd31f0 (image=quay.io/ceph/ceph:v18, name=heuristic_shirley, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:17:09 np0005539508 systemd[1]: libpod-conmon-d09b89a526a74a3b64e3a7e3eb8cc0a60e2a4482feff6465867912d76dbd31f0.scope: Deactivated successfully.
Nov 29 01:17:09 np0005539508 podman[77640]: 2025-11-29 06:17:09.743638107 +0000 UTC m=+0.108605189 container create d095724c2436c6d176dc356cd11a8593884650f6c094a31cefc972f5b7cc2056 (image=quay.io/ceph/ceph:v18, name=gracious_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 01:17:09 np0005539508 podman[77640]: 2025-11-29 06:17:09.654712905 +0000 UTC m=+0.019680007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:17:09 np0005539508 systemd[1]: Started libpod-conmon-d095724c2436c6d176dc356cd11a8593884650f6c094a31cefc972f5b7cc2056.scope.
Nov 29 01:17:10 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:17:10 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0fa0258cf49b633df2f762b8fe2068488244ad08f9d66a3892c879c746ec83/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:10 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0fa0258cf49b633df2f762b8fe2068488244ad08f9d66a3892c879c746ec83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:10 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0fa0258cf49b633df2f762b8fe2068488244ad08f9d66a3892c879c746ec83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:10 np0005539508 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77674 (sysctl)
Nov 29 01:17:10 np0005539508 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 29 01:17:10 np0005539508 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 29 01:17:10 np0005539508 podman[77640]: 2025-11-29 06:17:10.12030881 +0000 UTC m=+0.485275972 container init d095724c2436c6d176dc356cd11a8593884650f6c094a31cefc972f5b7cc2056 (image=quay.io/ceph/ceph:v18, name=gracious_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:17:10 np0005539508 podman[77640]: 2025-11-29 06:17:10.132111953 +0000 UTC m=+0.497079075 container start d095724c2436c6d176dc356cd11a8593884650f6c094a31cefc972f5b7cc2056 (image=quay.io/ceph/ceph:v18, name=gracious_blackwell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 01:17:10 np0005539508 ceph-mgr[74948]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 01:17:10 np0005539508 podman[77640]: 2025-11-29 06:17:10.277899362 +0000 UTC m=+0.642866464 container attach d095724c2436c6d176dc356cd11a8593884650f6c094a31cefc972f5b7cc2056 (image=quay.io/ceph/ceph:v18, name=gracious_blackwell, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 01:17:10 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:10 np0005539508 ceph-mon[74654]: Saving service crash spec with placement *
Nov 29 01:17:10 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Nov 29 01:17:10 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3953414744' entity='client.admin' 
Nov 29 01:17:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:17:10 np0005539508 systemd[1]: libpod-d095724c2436c6d176dc356cd11a8593884650f6c094a31cefc972f5b7cc2056.scope: Deactivated successfully.
Nov 29 01:17:10 np0005539508 podman[77640]: 2025-11-29 06:17:10.941393048 +0000 UTC m=+1.306360150 container died d095724c2436c6d176dc356cd11a8593884650f6c094a31cefc972f5b7cc2056 (image=quay.io/ceph/ceph:v18, name=gracious_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:17:11 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:11 np0005539508 systemd[1]: var-lib-containers-storage-overlay-9c0fa0258cf49b633df2f762b8fe2068488244ad08f9d66a3892c879c746ec83-merged.mount: Deactivated successfully.
Nov 29 01:17:11 np0005539508 podman[77640]: 2025-11-29 06:17:11.108211121 +0000 UTC m=+1.473178233 container remove d095724c2436c6d176dc356cd11a8593884650f6c094a31cefc972f5b7cc2056 (image=quay.io/ceph/ceph:v18, name=gracious_blackwell, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 01:17:11 np0005539508 systemd[1]: libpod-conmon-d095724c2436c6d176dc356cd11a8593884650f6c094a31cefc972f5b7cc2056.scope: Deactivated successfully.
Nov 29 01:17:11 np0005539508 podman[77871]: 2025-11-29 06:17:11.18532334 +0000 UTC m=+0.047819802 container create 221f969b23b1e00ca9353d68c7ffe28c9732d3bc6a33f6439bff9e1cbfdc079c (image=quay.io/ceph/ceph:v18, name=wonderful_wu, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:17:11 np0005539508 systemd[1]: Started libpod-conmon-221f969b23b1e00ca9353d68c7ffe28c9732d3bc6a33f6439bff9e1cbfdc079c.scope.
Nov 29 01:17:11 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:17:11 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7618681bf68a8efdd5790696c30fa2e80a7304f04ee403c77db50097598818cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:11 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7618681bf68a8efdd5790696c30fa2e80a7304f04ee403c77db50097598818cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:11 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7618681bf68a8efdd5790696c30fa2e80a7304f04ee403c77db50097598818cf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:11 np0005539508 podman[77871]: 2025-11-29 06:17:11.164948954 +0000 UTC m=+0.027445476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:17:11 np0005539508 podman[77871]: 2025-11-29 06:17:11.278200134 +0000 UTC m=+0.140696636 container init 221f969b23b1e00ca9353d68c7ffe28c9732d3bc6a33f6439bff9e1cbfdc079c (image=quay.io/ceph/ceph:v18, name=wonderful_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:17:11 np0005539508 podman[77871]: 2025-11-29 06:17:11.288260278 +0000 UTC m=+0.150756740 container start 221f969b23b1e00ca9353d68c7ffe28c9732d3bc6a33f6439bff9e1cbfdc079c (image=quay.io/ceph/ceph:v18, name=wonderful_wu, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 01:17:11 np0005539508 podman[77871]: 2025-11-29 06:17:11.291535121 +0000 UTC m=+0.154031743 container attach 221f969b23b1e00ca9353d68c7ffe28c9732d3bc6a33f6439bff9e1cbfdc079c (image=quay.io/ceph/ceph:v18, name=wonderful_wu, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 01:17:11 np0005539508 podman[78029]: 2025-11-29 06:17:11.73003563 +0000 UTC m=+0.060521291 container create c55f493ec37cbe86b6bba3bbeba9b1b27c6e39904a57c472d575f7fb9cbfe270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 01:17:11 np0005539508 systemd[1]: Started libpod-conmon-c55f493ec37cbe86b6bba3bbeba9b1b27c6e39904a57c472d575f7fb9cbfe270.scope.
Nov 29 01:17:11 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:17:11 np0005539508 podman[78029]: 2025-11-29 06:17:11.705031244 +0000 UTC m=+0.035516915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:17:11 np0005539508 podman[78029]: 2025-11-29 06:17:11.805617866 +0000 UTC m=+0.136103517 container init c55f493ec37cbe86b6bba3bbeba9b1b27c6e39904a57c472d575f7fb9cbfe270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:17:11 np0005539508 podman[78029]: 2025-11-29 06:17:11.815190306 +0000 UTC m=+0.145675977 container start c55f493ec37cbe86b6bba3bbeba9b1b27c6e39904a57c472d575f7fb9cbfe270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 29 01:17:11 np0005539508 crazy_zhukovsky[78046]: 167 167
Nov 29 01:17:11 np0005539508 systemd[1]: libpod-c55f493ec37cbe86b6bba3bbeba9b1b27c6e39904a57c472d575f7fb9cbfe270.scope: Deactivated successfully.
Nov 29 01:17:11 np0005539508 podman[78029]: 2025-11-29 06:17:11.821412742 +0000 UTC m=+0.151898413 container attach c55f493ec37cbe86b6bba3bbeba9b1b27c6e39904a57c472d575f7fb9cbfe270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:17:11 np0005539508 podman[78029]: 2025-11-29 06:17:11.822315988 +0000 UTC m=+0.152801659 container died c55f493ec37cbe86b6bba3bbeba9b1b27c6e39904a57c472d575f7fb9cbfe270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:17:11 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:17:11 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Nov 29 01:17:11 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:11 np0005539508 systemd[1]: var-lib-containers-storage-overlay-8e0f9cc1eb518a151bfc5d4a4cd582748020198f43dc90119f25e3e736df6d83-merged.mount: Deactivated successfully.
Nov 29 01:17:11 np0005539508 podman[78029]: 2025-11-29 06:17:11.875159271 +0000 UTC m=+0.205644932 container remove c55f493ec37cbe86b6bba3bbeba9b1b27c6e39904a57c472d575f7fb9cbfe270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 01:17:11 np0005539508 systemd[1]: libpod-221f969b23b1e00ca9353d68c7ffe28c9732d3bc6a33f6439bff9e1cbfdc079c.scope: Deactivated successfully.
Nov 29 01:17:11 np0005539508 podman[77871]: 2025-11-29 06:17:11.883858057 +0000 UTC m=+0.746354559 container died 221f969b23b1e00ca9353d68c7ffe28c9732d3bc6a33f6439bff9e1cbfdc079c (image=quay.io/ceph/ceph:v18, name=wonderful_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 01:17:11 np0005539508 systemd[1]: libpod-conmon-c55f493ec37cbe86b6bba3bbeba9b1b27c6e39904a57c472d575f7fb9cbfe270.scope: Deactivated successfully.
Nov 29 01:17:11 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/3953414744' entity='client.admin' 
Nov 29 01:17:11 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:11 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:11 np0005539508 systemd[1]: var-lib-containers-storage-overlay-7618681bf68a8efdd5790696c30fa2e80a7304f04ee403c77db50097598818cf-merged.mount: Deactivated successfully.
Nov 29 01:17:11 np0005539508 podman[77871]: 2025-11-29 06:17:11.935433834 +0000 UTC m=+0.797930306 container remove 221f969b23b1e00ca9353d68c7ffe28c9732d3bc6a33f6439bff9e1cbfdc079c (image=quay.io/ceph/ceph:v18, name=wonderful_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:17:11 np0005539508 systemd[1]: libpod-conmon-221f969b23b1e00ca9353d68c7ffe28c9732d3bc6a33f6439bff9e1cbfdc079c.scope: Deactivated successfully.
Nov 29 01:17:12 np0005539508 podman[78080]: 2025-11-29 06:17:12.005159274 +0000 UTC m=+0.048518342 container create 10b590bc36eb38fa86611a0035f0855c18c515ad68dfbdbc4c7b0cf58d4c42b1 (image=quay.io/ceph/ceph:v18, name=ecstatic_satoshi, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 01:17:12 np0005539508 systemd[1]: Started libpod-conmon-10b590bc36eb38fa86611a0035f0855c18c515ad68dfbdbc4c7b0cf58d4c42b1.scope.
Nov 29 01:17:12 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:17:12 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ef56cb89087450bcf611ec02be5b120d07ea480a323cfff399ce490bfde68e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:12 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ef56cb89087450bcf611ec02be5b120d07ea480a323cfff399ce490bfde68e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:12 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ef56cb89087450bcf611ec02be5b120d07ea480a323cfff399ce490bfde68e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:12 np0005539508 podman[78080]: 2025-11-29 06:17:11.984767598 +0000 UTC m=+0.028126696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:17:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:17:12 np0005539508 podman[78080]: 2025-11-29 06:17:12.081265074 +0000 UTC m=+0.124624152 container init 10b590bc36eb38fa86611a0035f0855c18c515ad68dfbdbc4c7b0cf58d4c42b1 (image=quay.io/ceph/ceph:v18, name=ecstatic_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:17:12 np0005539508 podman[78080]: 2025-11-29 06:17:12.086643876 +0000 UTC m=+0.130002924 container start 10b590bc36eb38fa86611a0035f0855c18c515ad68dfbdbc4c7b0cf58d4c42b1 (image=quay.io/ceph/ceph:v18, name=ecstatic_satoshi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:17:12 np0005539508 podman[78080]: 2025-11-29 06:17:12.089679952 +0000 UTC m=+0.133039000 container attach 10b590bc36eb38fa86611a0035f0855c18c515ad68dfbdbc4c7b0cf58d4c42b1 (image=quay.io/ceph/ceph:v18, name=ecstatic_satoshi, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:17:12 np0005539508 ceph-mgr[74948]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 01:17:12 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:17:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 01:17:12 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:12 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Added label _admin to host compute-0
Nov 29 01:17:12 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 29 01:17:12 np0005539508 ecstatic_satoshi[78096]: Added label _admin to host compute-0
Nov 29 01:17:12 np0005539508 systemd[1]: libpod-10b590bc36eb38fa86611a0035f0855c18c515ad68dfbdbc4c7b0cf58d4c42b1.scope: Deactivated successfully.
Nov 29 01:17:12 np0005539508 podman[78080]: 2025-11-29 06:17:12.662019853 +0000 UTC m=+0.705378901 container died 10b590bc36eb38fa86611a0035f0855c18c515ad68dfbdbc4c7b0cf58d4c42b1 (image=quay.io/ceph/ceph:v18, name=ecstatic_satoshi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 01:17:12 np0005539508 systemd[1]: var-lib-containers-storage-overlay-b1ef56cb89087450bcf611ec02be5b120d07ea480a323cfff399ce490bfde68e-merged.mount: Deactivated successfully.
Nov 29 01:17:12 np0005539508 podman[78080]: 2025-11-29 06:17:12.697743422 +0000 UTC m=+0.741102470 container remove 10b590bc36eb38fa86611a0035f0855c18c515ad68dfbdbc4c7b0cf58d4c42b1 (image=quay.io/ceph/ceph:v18, name=ecstatic_satoshi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:17:12 np0005539508 systemd[1]: libpod-conmon-10b590bc36eb38fa86611a0035f0855c18c515ad68dfbdbc4c7b0cf58d4c42b1.scope: Deactivated successfully.
Nov 29 01:17:12 np0005539508 podman[78134]: 2025-11-29 06:17:12.755190625 +0000 UTC m=+0.038895220 container create a535818ae9d4b73dacba881aa1cbd58f8fffa77f8ae0148964a184519c183359 (image=quay.io/ceph/ceph:v18, name=silly_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:17:12 np0005539508 systemd[1]: Started libpod-conmon-a535818ae9d4b73dacba881aa1cbd58f8fffa77f8ae0148964a184519c183359.scope.
Nov 29 01:17:12 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:17:12 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec7f341b41eb427e386022619e7a892054c1201a35460e5359cdc03f488b963/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:12 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec7f341b41eb427e386022619e7a892054c1201a35460e5359cdc03f488b963/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:12 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec7f341b41eb427e386022619e7a892054c1201a35460e5359cdc03f488b963/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:12 np0005539508 podman[78134]: 2025-11-29 06:17:12.82223729 +0000 UTC m=+0.105941965 container init a535818ae9d4b73dacba881aa1cbd58f8fffa77f8ae0148964a184519c183359 (image=quay.io/ceph/ceph:v18, name=silly_bassi, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 01:17:12 np0005539508 podman[78134]: 2025-11-29 06:17:12.827021235 +0000 UTC m=+0.110725820 container start a535818ae9d4b73dacba881aa1cbd58f8fffa77f8ae0148964a184519c183359 (image=quay.io/ceph/ceph:v18, name=silly_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:17:12 np0005539508 podman[78134]: 2025-11-29 06:17:12.831257734 +0000 UTC m=+0.114962339 container attach a535818ae9d4b73dacba881aa1cbd58f8fffa77f8ae0148964a184519c183359 (image=quay.io/ceph/ceph:v18, name=silly_bassi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 29 01:17:12 np0005539508 podman[78134]: 2025-11-29 06:17:12.737748213 +0000 UTC m=+0.021452838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:17:12 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:13 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Nov 29 01:17:13 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2360873568' entity='client.admin' 
Nov 29 01:17:13 np0005539508 systemd[1]: libpod-a535818ae9d4b73dacba881aa1cbd58f8fffa77f8ae0148964a184519c183359.scope: Deactivated successfully.
Nov 29 01:17:13 np0005539508 podman[78134]: 2025-11-29 06:17:13.380235675 +0000 UTC m=+0.663940260 container died a535818ae9d4b73dacba881aa1cbd58f8fffa77f8ae0148964a184519c183359 (image=quay.io/ceph/ceph:v18, name=silly_bassi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:17:13 np0005539508 systemd[1]: var-lib-containers-storage-overlay-8ec7f341b41eb427e386022619e7a892054c1201a35460e5359cdc03f488b963-merged.mount: Deactivated successfully.
Nov 29 01:17:13 np0005539508 podman[78134]: 2025-11-29 06:17:13.4253631 +0000 UTC m=+0.709067705 container remove a535818ae9d4b73dacba881aa1cbd58f8fffa77f8ae0148964a184519c183359 (image=quay.io/ceph/ceph:v18, name=silly_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 01:17:13 np0005539508 systemd[1]: libpod-conmon-a535818ae9d4b73dacba881aa1cbd58f8fffa77f8ae0148964a184519c183359.scope: Deactivated successfully.
Nov 29 01:17:13 np0005539508 podman[78192]: 2025-11-29 06:17:13.481816095 +0000 UTC m=+0.036792530 container create 45335a23046d9870740a62b1dd4e60e6fc0d2fa4e7aa60a384beb1098a55aeb9 (image=quay.io/ceph/ceph:v18, name=thirsty_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 01:17:13 np0005539508 systemd[1]: Started libpod-conmon-45335a23046d9870740a62b1dd4e60e6fc0d2fa4e7aa60a384beb1098a55aeb9.scope.
Nov 29 01:17:13 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:17:13 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94ab7bd71468b6d938eb28546ab86f469bbbc9ddabbfc1bc17ec0f4343ee2904/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:13 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94ab7bd71468b6d938eb28546ab86f469bbbc9ddabbfc1bc17ec0f4343ee2904/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:13 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94ab7bd71468b6d938eb28546ab86f469bbbc9ddabbfc1bc17ec0f4343ee2904/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:13 np0005539508 podman[78192]: 2025-11-29 06:17:13.542555832 +0000 UTC m=+0.097532287 container init 45335a23046d9870740a62b1dd4e60e6fc0d2fa4e7aa60a384beb1098a55aeb9 (image=quay.io/ceph/ceph:v18, name=thirsty_williams, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:17:13 np0005539508 podman[78192]: 2025-11-29 06:17:13.550367192 +0000 UTC m=+0.105343637 container start 45335a23046d9870740a62b1dd4e60e6fc0d2fa4e7aa60a384beb1098a55aeb9 (image=quay.io/ceph/ceph:v18, name=thirsty_williams, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:17:13 np0005539508 podman[78192]: 2025-11-29 06:17:13.553672496 +0000 UTC m=+0.108648951 container attach 45335a23046d9870740a62b1dd4e60e6fc0d2fa4e7aa60a384beb1098a55aeb9 (image=quay.io/ceph/ceph:v18, name=thirsty_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:17:13 np0005539508 podman[78192]: 2025-11-29 06:17:13.465528995 +0000 UTC m=+0.020505470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:17:13 np0005539508 ceph-mon[74654]: Added label _admin to host compute-0
Nov 29 01:17:13 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/2360873568' entity='client.admin' 
Nov 29 01:17:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Nov 29 01:17:14 np0005539508 ceph-mgr[74948]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 29 01:17:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:14 np0005539508 ceph-mon[74654]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 29 01:17:14 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3784537934' entity='client.admin' 
Nov 29 01:17:14 np0005539508 thirsty_williams[78209]: set mgr/dashboard/cluster/status
Nov 29 01:17:14 np0005539508 systemd[1]: libpod-45335a23046d9870740a62b1dd4e60e6fc0d2fa4e7aa60a384beb1098a55aeb9.scope: Deactivated successfully.
Nov 29 01:17:14 np0005539508 podman[78192]: 2025-11-29 06:17:14.188584944 +0000 UTC m=+0.743561439 container died 45335a23046d9870740a62b1dd4e60e6fc0d2fa4e7aa60a384beb1098a55aeb9 (image=quay.io/ceph/ceph:v18, name=thirsty_williams, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:17:14 np0005539508 systemd[1]: var-lib-containers-storage-overlay-94ab7bd71468b6d938eb28546ab86f469bbbc9ddabbfc1bc17ec0f4343ee2904-merged.mount: Deactivated successfully.
Nov 29 01:17:14 np0005539508 podman[78192]: 2025-11-29 06:17:14.236121117 +0000 UTC m=+0.791097592 container remove 45335a23046d9870740a62b1dd4e60e6fc0d2fa4e7aa60a384beb1098a55aeb9 (image=quay.io/ceph/ceph:v18, name=thirsty_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 01:17:14 np0005539508 systemd[1]: libpod-conmon-45335a23046d9870740a62b1dd4e60e6fc0d2fa4e7aa60a384beb1098a55aeb9.scope: Deactivated successfully.
Nov 29 01:17:14 np0005539508 podman[78255]: 2025-11-29 06:17:14.439917445 +0000 UTC m=+0.059716798 container create 49bb05fea5177c262012eed7abd7461739ec083067bc75c1cccf36f94651d01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:17:14 np0005539508 systemd[1]: Started libpod-conmon-49bb05fea5177c262012eed7abd7461739ec083067bc75c1cccf36f94651d01f.scope.
Nov 29 01:17:14 np0005539508 podman[78255]: 2025-11-29 06:17:14.420475466 +0000 UTC m=+0.040274849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:17:14 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:17:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b8953503029af7fd8d9420301237f6150c6188d1969f922675499fccf504f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b8953503029af7fd8d9420301237f6150c6188d1969f922675499fccf504f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b8953503029af7fd8d9420301237f6150c6188d1969f922675499fccf504f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b8953503029af7fd8d9420301237f6150c6188d1969f922675499fccf504f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:14 np0005539508 podman[78255]: 2025-11-29 06:17:14.532330006 +0000 UTC m=+0.152129349 container init 49bb05fea5177c262012eed7abd7461739ec083067bc75c1cccf36f94651d01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 01:17:14 np0005539508 podman[78255]: 2025-11-29 06:17:14.539296443 +0000 UTC m=+0.159095786 container start 49bb05fea5177c262012eed7abd7461739ec083067bc75c1cccf36f94651d01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 01:17:14 np0005539508 podman[78255]: 2025-11-29 06:17:14.542516694 +0000 UTC m=+0.162316037 container attach 49bb05fea5177c262012eed7abd7461739ec083067bc75c1cccf36f94651d01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 01:17:14 np0005539508 python3[78301]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:17:14 np0005539508 podman[78302]: 2025-11-29 06:17:14.894647063 +0000 UTC m=+0.038748926 container create be859d020910b2595caed9387fd1384ac9d9592ad4d2e4c3d73b442d9530c1b5 (image=quay.io/ceph/ceph:v18, name=eager_ramanujan, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 01:17:14 np0005539508 ceph-mon[74654]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 29 01:17:14 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/3784537934' entity='client.admin' 
Nov 29 01:17:14 np0005539508 systemd[1]: Started libpod-conmon-be859d020910b2595caed9387fd1384ac9d9592ad4d2e4c3d73b442d9530c1b5.scope.
Nov 29 01:17:14 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:17:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4273110c6a81d4fa888ecb5fdc938f4f4d4f7d5c399d7d52b1f25071a0c00c3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4273110c6a81d4fa888ecb5fdc938f4f4d4f7d5c399d7d52b1f25071a0c00c3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:14 np0005539508 podman[78302]: 2025-11-29 06:17:14.962193421 +0000 UTC m=+0.106295304 container init be859d020910b2595caed9387fd1384ac9d9592ad4d2e4c3d73b442d9530c1b5 (image=quay.io/ceph/ceph:v18, name=eager_ramanujan, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:17:14 np0005539508 podman[78302]: 2025-11-29 06:17:14.968134329 +0000 UTC m=+0.112236192 container start be859d020910b2595caed9387fd1384ac9d9592ad4d2e4c3d73b442d9530c1b5 (image=quay.io/ceph/ceph:v18, name=eager_ramanujan, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:17:14 np0005539508 podman[78302]: 2025-11-29 06:17:14.877001144 +0000 UTC m=+0.021103027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:17:14 np0005539508 podman[78302]: 2025-11-29 06:17:14.971258137 +0000 UTC m=+0.115360010 container attach be859d020910b2595caed9387fd1384ac9d9592ad4d2e4c3d73b442d9530c1b5 (image=quay.io/ceph/ceph:v18, name=eager_ramanujan, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:17:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Nov 29 01:17:15 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2625088103' entity='client.admin' 
Nov 29 01:17:15 np0005539508 systemd[1]: libpod-be859d020910b2595caed9387fd1384ac9d9592ad4d2e4c3d73b442d9530c1b5.scope: Deactivated successfully.
Nov 29 01:17:15 np0005539508 podman[78302]: 2025-11-29 06:17:15.578795563 +0000 UTC m=+0.722897416 container died be859d020910b2595caed9387fd1384ac9d9592ad4d2e4c3d73b442d9530c1b5 (image=quay.io/ceph/ceph:v18, name=eager_ramanujan, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 01:17:15 np0005539508 systemd[1]: var-lib-containers-storage-overlay-e4273110c6a81d4fa888ecb5fdc938f4f4d4f7d5c399d7d52b1f25071a0c00c3-merged.mount: Deactivated successfully.
Nov 29 01:17:15 np0005539508 podman[78302]: 2025-11-29 06:17:15.626529902 +0000 UTC m=+0.770631765 container remove be859d020910b2595caed9387fd1384ac9d9592ad4d2e4c3d73b442d9530c1b5 (image=quay.io/ceph/ceph:v18, name=eager_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:17:15 np0005539508 systemd[1]: libpod-conmon-be859d020910b2595caed9387fd1384ac9d9592ad4d2e4c3d73b442d9530c1b5.scope: Deactivated successfully.
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]: [
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:    {
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:        "available": false,
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:        "ceph_device": false,
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:        "lsm_data": {},
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:        "lvs": [],
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:        "path": "/dev/sr0",
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:        "rejected_reasons": [
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "Has a FileSystem",
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "Insufficient space (<5GB)"
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:        ],
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:        "sys_api": {
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "actuators": null,
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "device_nodes": "sr0",
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "devname": "sr0",
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "human_readable_size": "482.00 KB",
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "id_bus": "ata",
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "model": "QEMU DVD-ROM",
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "nr_requests": "2",
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "parent": "/dev/sr0",
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "partitions": {},
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "path": "/dev/sr0",
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "removable": "1",
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "rev": "2.5+",
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "ro": "0",
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "rotational": "1",
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "sas_address": "",
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "sas_device_handle": "",
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "scheduler_mode": "mq-deadline",
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "sectors": 0,
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "sectorsize": "2048",
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "size": 493568.0,
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "support_discard": "2048",
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "type": "disk",
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:            "vendor": "QEMU"
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:        }
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]:    }
Nov 29 01:17:15 np0005539508 ecstatic_hodgkin[78271]: ]
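The `ecstatic_hodgkin` container output above is a `ceph-volume inventory --format json` report, split across log lines by journald. Reassembled, it can be filtered programmatically to see which devices cephadm considers usable; a minimal sketch, with the JSON literal copied from the log (fields trimmed for brevity):

```python
import json

# Inventory JSON as printed by the ceph-volume container above,
# reassembled from the per-line journald output and trimmed.
inventory = json.loads("""
[
   {
       "available": false,
       "device_id": "QEMU_DVD-ROM_QM00001",
       "path": "/dev/sr0",
       "rejected_reasons": [
           "Has a FileSystem",
           "Insufficient space (<5GB)"
       ],
       "sys_api": {"size": 493568.0, "type": "disk", "vendor": "QEMU"}
   }
]
""")

# Devices cephadm could use as OSD backing stores vs. rejected ones.
usable = [d["path"] for d in inventory if d["available"]]
rejected = {d["path"]: d["rejected_reasons"] for d in inventory if not d["available"]}

print(usable)    # []
print(rejected)  # {'/dev/sr0': ['Has a FileSystem', 'Insufficient space (<5GB)']}
```

With only a QEMU virtual DVD-ROM present, no device is usable, which is consistent with the `TOO_FEW_OSDS` health check logged earlier.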
Nov 29 01:17:15 np0005539508 systemd[1]: libpod-49bb05fea5177c262012eed7abd7461739ec083067bc75c1cccf36f94651d01f.scope: Deactivated successfully.
Nov 29 01:17:15 np0005539508 systemd[1]: libpod-49bb05fea5177c262012eed7abd7461739ec083067bc75c1cccf36f94651d01f.scope: Consumed 1.153s CPU time.
Nov 29 01:17:15 np0005539508 conmon[78271]: conmon 49bb05fea5177c262012 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-49bb05fea5177c262012eed7abd7461739ec083067bc75c1cccf36f94651d01f.scope/container/memory.events
Nov 29 01:17:15 np0005539508 podman[78255]: 2025-11-29 06:17:15.717671507 +0000 UTC m=+1.337470850 container died 49bb05fea5177c262012eed7abd7461739ec083067bc75c1cccf36f94651d01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 29 01:17:15 np0005539508 systemd[1]: var-lib-containers-storage-overlay-41b8953503029af7fd8d9420301237f6150c6188d1969f922675499fccf504f4-merged.mount: Deactivated successfully.
Nov 29 01:17:15 np0005539508 podman[78255]: 2025-11-29 06:17:15.775980064 +0000 UTC m=+1.395779397 container remove 49bb05fea5177c262012eed7abd7461739ec083067bc75c1cccf36f94651d01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:17:15 np0005539508 systemd[1]: libpod-conmon-49bb05fea5177c262012eed7abd7461739ec083067bc75c1cccf36f94651d01f.scope: Deactivated successfully.
Nov 29 01:17:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:17:15 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:17:15 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:17:15 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:17:15 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 01:17:15 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 01:17:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:17:15 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:17:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:17:15 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:17:15 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 29 01:17:15 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 29 01:17:15 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/2625088103' entity='client.admin' 
Nov 29 01:17:15 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:15 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:15 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:15 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:15 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 01:17:15 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
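The `log_channel(audit)` lines above embed each dispatched command as a JSON array after `cmd=`. A minimal parsing sketch; the sample line is copied from the log, while the regex is an assumption about the audit-line format:

```python
import json
import re

# One audit line as emitted by ceph-mon above (timestamp/host prefix stripped).
line = ("log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' "
        "entity='mgr.compute-0.vxabpq' cmd=[{\"prefix\": \"config rm\", "
        "\"who\": \"osd/host:compute-0\", \"name\": \"osd_memory_target\"}]: dispatch")

# Pull the JSON array that follows `cmd=`, up to the trailing `: dispatch`.
match = re.search(r"cmd=(\[.*\]): dispatch", line)
cmds = json.loads(match.group(1)) if match else []

for cmd in cmds:
    print(cmd["prefix"], cmd.get("who", ""))  # config rm osd/host:compute-0
```

This kind of extraction is handy when auditing which entity issued which mon commands during a deployment run.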
Nov 29 01:17:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:16 np0005539508 ansible-async_wrapper.py[79884]: Invoked with j282143511716 30 /home/zuul/.ansible/tmp/ansible-tmp-1764397036.092439-37281-231699099922651/AnsiballZ_command.py _
Nov 29 01:17:16 np0005539508 ansible-async_wrapper.py[79945]: Starting module and watcher
Nov 29 01:17:16 np0005539508 ansible-async_wrapper.py[79945]: Start watching 79947 (30)
Nov 29 01:17:16 np0005539508 ansible-async_wrapper.py[79947]: Start module (79947)
Nov 29 01:17:16 np0005539508 ansible-async_wrapper.py[79884]: Return async_wrapper task started.
Nov 29 01:17:16 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 01:17:16 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 01:17:16 np0005539508 python3[79954]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:17:16 np0005539508 ceph-mon[74654]: Updating compute-0:/etc/ceph/ceph.conf
Nov 29 01:17:17 np0005539508 podman[80016]: 2025-11-29 06:17:17.041763018 +0000 UTC m=+0.049980993 container create 7fd992022eb1e2deed43264b9bd8e25273892572a88862104337830e85d1ce5a (image=quay.io/ceph/ceph:v18, name=competent_elion, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:17:17 np0005539508 systemd[1]: Started libpod-conmon-7fd992022eb1e2deed43264b9bd8e25273892572a88862104337830e85d1ce5a.scope.
Nov 29 01:17:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:17:17 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:17:17 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0aadb614c7479e60acc0ae5cf9247596a14e29de8b0801421525fa2684b3657/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:17 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0aadb614c7479e60acc0ae5cf9247596a14e29de8b0801421525fa2684b3657/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:17 np0005539508 podman[80016]: 2025-11-29 06:17:17.023549303 +0000 UTC m=+0.031767288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:17:17 np0005539508 podman[80016]: 2025-11-29 06:17:17.127324275 +0000 UTC m=+0.135542280 container init 7fd992022eb1e2deed43264b9bd8e25273892572a88862104337830e85d1ce5a (image=quay.io/ceph/ceph:v18, name=competent_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:17:17 np0005539508 podman[80016]: 2025-11-29 06:17:17.133920142 +0000 UTC m=+0.142138137 container start 7fd992022eb1e2deed43264b9bd8e25273892572a88862104337830e85d1ce5a (image=quay.io/ceph/ceph:v18, name=competent_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 01:17:17 np0005539508 podman[80016]: 2025-11-29 06:17:17.171565565 +0000 UTC m=+0.179783560 container attach 7fd992022eb1e2deed43264b9bd8e25273892572a88862104337830e85d1ce5a (image=quay.io/ceph/ceph:v18, name=competent_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 01:17:17 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 01:17:17 np0005539508 competent_elion[80080]: 
Nov 29 01:17:17 np0005539508 competent_elion[80080]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
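The `competent_elion` container above ran `ceph orch status --format json` and printed a small JSON document. A hedged sketch of gating further automation on that output, with the JSON literal copied from the log:

```python
import json

# Output of `ceph orch status --format json`, as logged by the container above.
raw = '{"available": true, "backend": "cephadm", "paused": false, "workers": 10}'
status = json.loads(raw)

# The orchestrator must be available and unpaused before scheduling daemons.
ready = status["available"] and not status["paused"]
print(f"backend={status['backend']} ready={ready} workers={status['workers']}")
```

A deployment playbook like the one invoking these `podman run` commands would typically retry until `ready` is true before applying service specs.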
Nov 29 01:17:17 np0005539508 systemd[1]: libpod-7fd992022eb1e2deed43264b9bd8e25273892572a88862104337830e85d1ce5a.scope: Deactivated successfully.
Nov 29 01:17:17 np0005539508 podman[80016]: 2025-11-29 06:17:17.676686556 +0000 UTC m=+0.684904541 container died 7fd992022eb1e2deed43264b9bd8e25273892572a88862104337830e85d1ce5a (image=quay.io/ceph/ceph:v18, name=competent_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 01:17:17 np0005539508 systemd[1]: var-lib-containers-storage-overlay-c0aadb614c7479e60acc0ae5cf9247596a14e29de8b0801421525fa2684b3657-merged.mount: Deactivated successfully.
Nov 29 01:17:17 np0005539508 podman[80016]: 2025-11-29 06:17:17.76813068 +0000 UTC m=+0.776348655 container remove 7fd992022eb1e2deed43264b9bd8e25273892572a88862104337830e85d1ce5a (image=quay.io/ceph/ceph:v18, name=competent_elion, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:17:17 np0005539508 ansible-async_wrapper.py[79947]: Module complete (79947)
Nov 29 01:17:17 np0005539508 systemd[1]: libpod-conmon-7fd992022eb1e2deed43264b9bd8e25273892572a88862104337830e85d1ce5a.scope: Deactivated successfully.
Nov 29 01:17:18 np0005539508 ceph-mon[74654]: Updating compute-0:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 01:17:18 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 29 01:17:18 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 29 01:17:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:18 np0005539508 python3[80589]: ansible-ansible.legacy.async_status Invoked with jid=j282143511716.79884 mode=status _async_dir=/root/.ansible_async
Nov 29 01:17:18 np0005539508 python3[80758]: ansible-ansible.legacy.async_status Invoked with jid=j282143511716.79884 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 01:17:19 np0005539508 python3[80941]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 01:17:19 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.client.admin.keyring
Nov 29 01:17:19 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.client.admin.keyring
Nov 29 01:17:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:20 np0005539508 python3[81151]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:17:20 np0005539508 podman[81211]: 2025-11-29 06:17:20.358005834 +0000 UTC m=+0.103731699 container create 504b3a173884cf1bf24a8325ee20161bc7a1e4646bcfe8aa2fcc8cb599bb3295 (image=quay.io/ceph/ceph:v18, name=peaceful_ardinghelli, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:17:20 np0005539508 podman[81211]: 2025-11-29 06:17:20.300436257 +0000 UTC m=+0.046162122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:17:20 np0005539508 systemd[1]: Started libpod-conmon-504b3a173884cf1bf24a8325ee20161bc7a1e4646bcfe8aa2fcc8cb599bb3295.scope.
Nov 29 01:17:20 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:17:20 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0f036e20ab04281b1a06efe010afa5c54d17c1afbef0782828a18c36ca8d99/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:20 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0f036e20ab04281b1a06efe010afa5c54d17c1afbef0782828a18c36ca8d99/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:20 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0f036e20ab04281b1a06efe010afa5c54d17c1afbef0782828a18c36ca8d99/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:20 np0005539508 podman[81211]: 2025-11-29 06:17:20.447564466 +0000 UTC m=+0.193290421 container init 504b3a173884cf1bf24a8325ee20161bc7a1e4646bcfe8aa2fcc8cb599bb3295 (image=quay.io/ceph/ceph:v18, name=peaceful_ardinghelli, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 01:17:20 np0005539508 podman[81211]: 2025-11-29 06:17:20.457905761 +0000 UTC m=+0.203631626 container start 504b3a173884cf1bf24a8325ee20161bc7a1e4646bcfe8aa2fcc8cb599bb3295 (image=quay.io/ceph/ceph:v18, name=peaceful_ardinghelli, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:17:20 np0005539508 podman[81211]: 2025-11-29 06:17:20.461754732 +0000 UTC m=+0.207480607 container attach 504b3a173884cf1bf24a8325ee20161bc7a1e4646bcfe8aa2fcc8cb599bb3295 (image=quay.io/ceph/ceph:v18, name=peaceful_ardinghelli, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 01:17:20 np0005539508 ceph-mon[74654]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 29 01:17:20 np0005539508 ceph-mon[74654]: Updating compute-0:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.client.admin.keyring
Nov 29 01:17:21 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 01:17:21 np0005539508 peaceful_ardinghelli[81283]: 
Nov 29 01:17:21 np0005539508 peaceful_ardinghelli[81283]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 01:17:21 np0005539508 systemd[1]: libpod-504b3a173884cf1bf24a8325ee20161bc7a1e4646bcfe8aa2fcc8cb599bb3295.scope: Deactivated successfully.
Nov 29 01:17:21 np0005539508 podman[81211]: 2025-11-29 06:17:21.028265737 +0000 UTC m=+0.773991642 container died 504b3a173884cf1bf24a8325ee20161bc7a1e4646bcfe8aa2fcc8cb599bb3295 (image=quay.io/ceph/ceph:v18, name=peaceful_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:17:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:17:21 np0005539508 systemd[1]: var-lib-containers-storage-overlay-9e0f036e20ab04281b1a06efe010afa5c54d17c1afbef0782828a18c36ca8d99-merged.mount: Deactivated successfully.
Nov 29 01:17:21 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:17:21 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:17:21 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:21 np0005539508 ceph-mgr[74948]: [progress INFO root] update: starting ev d5b7596b-4bc5-43ef-9c91-457e672e09b3 (Updating crash deployment (+1 -> 1))
Nov 29 01:17:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 29 01:17:21 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 01:17:21 np0005539508 podman[81211]: 2025-11-29 06:17:21.08533578 +0000 UTC m=+0.831061685 container remove 504b3a173884cf1bf24a8325ee20161bc7a1e4646bcfe8aa2fcc8cb599bb3295 (image=quay.io/ceph/ceph:v18, name=peaceful_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 01:17:21 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 01:17:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:17:21 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:17:21 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 29 01:17:21 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 29 01:17:21 np0005539508 systemd[1]: libpod-conmon-504b3a173884cf1bf24a8325ee20161bc7a1e4646bcfe8aa2fcc8cb599bb3295.scope: Deactivated successfully.
Nov 29 01:17:21 np0005539508 python3[81643]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:17:21 np0005539508 podman[81663]: 2025-11-29 06:17:21.716395983 +0000 UTC m=+0.062062106 container create 24addaaae691ccf46c16f57c99a0468ad7ca75d1e846f7f337f549ea5211b819 (image=quay.io/ceph/ceph:v18, name=thirsty_davinci, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:17:21 np0005539508 systemd[1]: Started libpod-conmon-24addaaae691ccf46c16f57c99a0468ad7ca75d1e846f7f337f549ea5211b819.scope.
Nov 29 01:17:21 np0005539508 podman[81663]: 2025-11-29 06:17:21.692943103 +0000 UTC m=+0.038609276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:17:21 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:17:21 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108e201ac71738fa7b2a0642dcf7b514cea40f30e8242e7dc01a2c72437d7504/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:21 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108e201ac71738fa7b2a0642dcf7b514cea40f30e8242e7dc01a2c72437d7504/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:21 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108e201ac71738fa7b2a0642dcf7b514cea40f30e8242e7dc01a2c72437d7504/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:21 np0005539508 podman[81704]: 2025-11-29 06:17:21.806655056 +0000 UTC m=+0.056380754 container create 732fa840deaa0eb2e67e851ed82af63f06ae451586814f1c26ef7ac2fb340c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:17:21 np0005539508 ansible-async_wrapper.py[79945]: Done in kid B.
Nov 29 01:17:21 np0005539508 podman[81663]: 2025-11-29 06:17:21.828220753 +0000 UTC m=+0.173886886 container init 24addaaae691ccf46c16f57c99a0468ad7ca75d1e846f7f337f549ea5211b819 (image=quay.io/ceph/ceph:v18, name=thirsty_davinci, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 01:17:21 np0005539508 podman[81663]: 2025-11-29 06:17:21.835278124 +0000 UTC m=+0.180944247 container start 24addaaae691ccf46c16f57c99a0468ad7ca75d1e846f7f337f549ea5211b819 (image=quay.io/ceph/ceph:v18, name=thirsty_davinci, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:17:21 np0005539508 podman[81663]: 2025-11-29 06:17:21.838911118 +0000 UTC m=+0.184577281 container attach 24addaaae691ccf46c16f57c99a0468ad7ca75d1e846f7f337f549ea5211b819 (image=quay.io/ceph/ceph:v18, name=thirsty_davinci, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 01:17:21 np0005539508 systemd[1]: Started libpod-conmon-732fa840deaa0eb2e67e851ed82af63f06ae451586814f1c26ef7ac2fb340c21.scope.
Nov 29 01:17:21 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:17:21 np0005539508 podman[81704]: 2025-11-29 06:17:21.779622372 +0000 UTC m=+0.029348140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:17:21 np0005539508 podman[81704]: 2025-11-29 06:17:21.882110314 +0000 UTC m=+0.131836032 container init 732fa840deaa0eb2e67e851ed82af63f06ae451586814f1c26ef7ac2fb340c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 01:17:21 np0005539508 podman[81704]: 2025-11-29 06:17:21.89174093 +0000 UTC m=+0.141466618 container start 732fa840deaa0eb2e67e851ed82af63f06ae451586814f1c26ef7ac2fb340c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:17:21 np0005539508 zealous_shamir[81722]: 167 167
Nov 29 01:17:21 np0005539508 systemd[1]: libpod-732fa840deaa0eb2e67e851ed82af63f06ae451586814f1c26ef7ac2fb340c21.scope: Deactivated successfully.
Nov 29 01:17:21 np0005539508 podman[81704]: 2025-11-29 06:17:21.89630853 +0000 UTC m=+0.146034218 container attach 732fa840deaa0eb2e67e851ed82af63f06ae451586814f1c26ef7ac2fb340c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 01:17:21 np0005539508 podman[81704]: 2025-11-29 06:17:21.897128004 +0000 UTC m=+0.146853692 container died 732fa840deaa0eb2e67e851ed82af63f06ae451586814f1c26ef7ac2fb340c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:17:21 np0005539508 systemd[1]: var-lib-containers-storage-overlay-3b23363f840d0a3b785f08c140075b551350cfed1ab8029ac4c887457a67af4e-merged.mount: Deactivated successfully.
Nov 29 01:17:21 np0005539508 podman[81704]: 2025-11-29 06:17:21.935724808 +0000 UTC m=+0.185450496 container remove 732fa840deaa0eb2e67e851ed82af63f06ae451586814f1c26ef7ac2fb340c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 01:17:21 np0005539508 systemd[1]: libpod-conmon-732fa840deaa0eb2e67e851ed82af63f06ae451586814f1c26ef7ac2fb340c21.scope: Deactivated successfully.
Nov 29 01:17:21 np0005539508 systemd[1]: Reloading.
Nov 29 01:17:22 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:22 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:22 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:22 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 01:17:22 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 01:17:22 np0005539508 ceph-mon[74654]: Deploying daemon crash.compute-0 on compute-0
Nov 29 01:17:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:17:22 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:17:22 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:17:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:22 np0005539508 systemd[1]: Reloading.
Nov 29 01:17:22 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:17:22 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:17:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Nov 29 01:17:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/819257723' entity='client.admin' 
Nov 29 01:17:22 np0005539508 podman[81838]: 2025-11-29 06:17:22.483135858 +0000 UTC m=+0.027357433 container died 24addaaae691ccf46c16f57c99a0468ad7ca75d1e846f7f337f549ea5211b819 (image=quay.io/ceph/ceph:v18, name=thirsty_davinci, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:17:22 np0005539508 systemd[1]: libpod-24addaaae691ccf46c16f57c99a0468ad7ca75d1e846f7f337f549ea5211b819.scope: Deactivated successfully.
Nov 29 01:17:22 np0005539508 systemd[1]: var-lib-containers-storage-overlay-108e201ac71738fa7b2a0642dcf7b514cea40f30e8242e7dc01a2c72437d7504-merged.mount: Deactivated successfully.
Nov 29 01:17:22 np0005539508 podman[81838]: 2025-11-29 06:17:22.582093179 +0000 UTC m=+0.126314754 container remove 24addaaae691ccf46c16f57c99a0468ad7ca75d1e846f7f337f549ea5211b819 (image=quay.io/ceph/ceph:v18, name=thirsty_davinci, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:17:22 np0005539508 systemd[1]: Starting Ceph crash.compute-0 for 336ec58c-893b-528f-a0c1-6ed1196bc047...
Nov 29 01:17:22 np0005539508 systemd[1]: libpod-conmon-24addaaae691ccf46c16f57c99a0468ad7ca75d1e846f7f337f549ea5211b819.scope: Deactivated successfully.
Nov 29 01:17:22 np0005539508 python3[81918]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:17:22 np0005539508 podman[81929]: 2025-11-29 06:17:22.932937886 +0000 UTC m=+0.074718578 container create 47d65a8aff6f8bb06b14bb6c7e55e80de34011b4a202edc5f9e2d357b0f6e97f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 01:17:22 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d11ca774cc4f124d5b59323e04a96c93480c6ed05f7ffca6950cb7de29f22fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:22 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d11ca774cc4f124d5b59323e04a96c93480c6ed05f7ffca6950cb7de29f22fc/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:22 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d11ca774cc4f124d5b59323e04a96c93480c6ed05f7ffca6950cb7de29f22fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:22 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d11ca774cc4f124d5b59323e04a96c93480c6ed05f7ffca6950cb7de29f22fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:22 np0005539508 podman[81929]: 2025-11-29 06:17:22.90057262 +0000 UTC m=+0.042353312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:17:23 np0005539508 podman[81929]: 2025-11-29 06:17:23.009512027 +0000 UTC m=+0.151292699 container init 47d65a8aff6f8bb06b14bb6c7e55e80de34011b4a202edc5f9e2d357b0f6e97f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-crash-compute-0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:17:23 np0005539508 podman[81929]: 2025-11-29 06:17:23.018497694 +0000 UTC m=+0.160278346 container start 47d65a8aff6f8bb06b14bb6c7e55e80de34011b4a202edc5f9e2d357b0f6e97f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-crash-compute-0, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 01:17:23 np0005539508 bash[81929]: 47d65a8aff6f8bb06b14bb6c7e55e80de34011b4a202edc5f9e2d357b0f6e97f
Nov 29 01:17:23 np0005539508 podman[81942]: 2025-11-29 06:17:23.027771839 +0000 UTC m=+0.070523188 container create 666e40e078db47c71a223dfd93ef3475ff80af3b9dba5a24eb613fbd75e6ebb5 (image=quay.io/ceph/ceph:v18, name=competent_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:17:23 np0005539508 systemd[1]: Started Ceph crash.compute-0 for 336ec58c-893b-528f-a0c1-6ed1196bc047.
Nov 29 01:17:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:17:23 np0005539508 systemd[1]: Started libpod-conmon-666e40e078db47c71a223dfd93ef3475ff80af3b9dba5a24eb613fbd75e6ebb5.scope.
Nov 29 01:17:23 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:17:23 np0005539508 podman[81942]: 2025-11-29 06:17:23.006087809 +0000 UTC m=+0.048839188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:17:23 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 01:17:23 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:23 np0005539508 ceph-mgr[74948]: [progress INFO root] complete: finished ev d5b7596b-4bc5-43ef-9c91-457e672e09b3 (Updating crash deployment (+1 -> 1))
Nov 29 01:17:23 np0005539508 ceph-mgr[74948]: [progress INFO root] Completed event d5b7596b-4bc5-43ef-9c91-457e672e09b3 (Updating crash deployment (+1 -> 1)) in 2 seconds
Nov 29 01:17:23 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:17:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 01:17:23 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a369139f81ea6fbe78ae6367bffd71959dbcd1b53b42e225e33b891bf2ca560e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:23 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a369139f81ea6fbe78ae6367bffd71959dbcd1b53b42e225e33b891bf2ca560e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:23 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a369139f81ea6fbe78ae6367bffd71959dbcd1b53b42e225e33b891bf2ca560e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:23 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:23 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 33c939e7-5213-46e1-a759-288e8057c6b0 does not exist
Nov 29 01:17:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 01:17:23 np0005539508 podman[81942]: 2025-11-29 06:17:23.133755301 +0000 UTC m=+0.176506680 container init 666e40e078db47c71a223dfd93ef3475ff80af3b9dba5a24eb613fbd75e6ebb5 (image=quay.io/ceph/ceph:v18, name=competent_bell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:17:23 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:23 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev b85fcc90-c81a-44bc-a870-abe338067d16 does not exist
Nov 29 01:17:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 01:17:23 np0005539508 podman[81942]: 2025-11-29 06:17:23.147408562 +0000 UTC m=+0.190159941 container start 666e40e078db47c71a223dfd93ef3475ff80af3b9dba5a24eb613fbd75e6ebb5 (image=quay.io/ceph/ceph:v18, name=competent_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 29 01:17:23 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:23 np0005539508 podman[81942]: 2025-11-29 06:17:23.151099938 +0000 UTC m=+0.193851327 container attach 666e40e078db47c71a223dfd93ef3475ff80af3b9dba5a24eb613fbd75e6ebb5 (image=quay.io/ceph/ceph:v18, name=competent_bell, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 01:17:23 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-crash-compute-0[81952]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 29 01:17:23 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-crash-compute-0[81952]: 2025-11-29T06:17:23.427+0000 7f9014f1f640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 29 01:17:23 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-crash-compute-0[81952]: 2025-11-29T06:17:23.427+0000 7f9014f1f640 -1 AuthRegistry(0x7f9010066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 29 01:17:23 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-crash-compute-0[81952]: 2025-11-29T06:17:23.429+0000 7f9014f1f640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 29 01:17:23 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-crash-compute-0[81952]: 2025-11-29T06:17:23.429+0000 7f9014f1f640 -1 AuthRegistry(0x7f9014f1e000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 29 01:17:23 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/819257723' entity='client.admin' 
Nov 29 01:17:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:23 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-crash-compute-0[81952]: 2025-11-29T06:17:23.430+0000 7f900e575640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 29 01:17:23 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-crash-compute-0[81952]: 2025-11-29T06:17:23.430+0000 7f9014f1f640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 29 01:17:23 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-crash-compute-0[81952]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 29 01:17:23 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-crash-compute-0[81952]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 29 01:17:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Nov 29 01:17:23 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1374985863' entity='client.admin' 
Nov 29 01:17:23 np0005539508 systemd[1]: libpod-666e40e078db47c71a223dfd93ef3475ff80af3b9dba5a24eb613fbd75e6ebb5.scope: Deactivated successfully.
Nov 29 01:17:23 np0005539508 podman[81942]: 2025-11-29 06:17:23.695715968 +0000 UTC m=+0.738467347 container died 666e40e078db47c71a223dfd93ef3475ff80af3b9dba5a24eb613fbd75e6ebb5 (image=quay.io/ceph/ceph:v18, name=competent_bell, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 01:17:23 np0005539508 systemd[1]: var-lib-containers-storage-overlay-a369139f81ea6fbe78ae6367bffd71959dbcd1b53b42e225e33b891bf2ca560e-merged.mount: Deactivated successfully.
Nov 29 01:17:23 np0005539508 podman[81942]: 2025-11-29 06:17:23.765214476 +0000 UTC m=+0.807965855 container remove 666e40e078db47c71a223dfd93ef3475ff80af3b9dba5a24eb613fbd75e6ebb5 (image=quay.io/ceph/ceph:v18, name=competent_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:17:23 np0005539508 systemd[1]: libpod-conmon-666e40e078db47c71a223dfd93ef3475ff80af3b9dba5a24eb613fbd75e6ebb5.scope: Deactivated successfully.
Nov 29 01:17:24 np0005539508 python3[82245]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:17:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:24 np0005539508 ceph-mgr[74948]: [progress INFO root] Writing back 1 completed events
Nov 29 01:17:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 01:17:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:17:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:17:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:17:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:17:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:17:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:17:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:25 np0005539508 podman[82260]: 2025-11-29 06:17:25.013073082 +0000 UTC m=+0.943095888 container exec c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:17:25 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/1374985863' entity='client.admin' 
Nov 29 01:17:25 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:25 np0005539508 podman[82291]: 2025-11-29 06:17:25.192806455 +0000 UTC m=+0.050910248 container exec_died c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:17:25 np0005539508 podman[82260]: 2025-11-29 06:17:25.303652296 +0000 UTC m=+1.233675082 container exec_died c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 01:17:25 np0005539508 podman[82274]: 2025-11-29 06:17:25.739627389 +0000 UTC m=+1.559071852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:17:25 np0005539508 podman[82274]: 2025-11-29 06:17:25.949624457 +0000 UTC m=+1.769068860 container create 2b211d6a70b5cead536e9ec9cf62aa134bc225d74ec2e640376a0f844b76d278 (image=quay.io/ceph/ceph:v18, name=vigorous_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 01:17:26 np0005539508 systemd[1]: Started libpod-conmon-2b211d6a70b5cead536e9ec9cf62aa134bc225d74ec2e640376a0f844b76d278.scope.
Nov 29 01:17:26 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:17:26 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc04187c7913311045295d4fa09d51b4417e7afc62b75932475e4fa15fb355b9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:26 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc04187c7913311045295d4fa09d51b4417e7afc62b75932475e4fa15fb355b9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:26 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc04187c7913311045295d4fa09d51b4417e7afc62b75932475e4fa15fb355b9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:26 np0005539508 podman[82274]: 2025-11-29 06:17:26.0933926 +0000 UTC m=+1.912836993 container init 2b211d6a70b5cead536e9ec9cf62aa134bc225d74ec2e640376a0f844b76d278 (image=quay.io/ceph/ceph:v18, name=vigorous_ritchie, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Nov 29 01:17:26 np0005539508 podman[82274]: 2025-11-29 06:17:26.104370224 +0000 UTC m=+1.923814617 container start 2b211d6a70b5cead536e9ec9cf62aa134bc225d74ec2e640376a0f844b76d278 (image=quay.io/ceph/ceph:v18, name=vigorous_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 01:17:26 np0005539508 podman[82274]: 2025-11-29 06:17:26.109089019 +0000 UTC m=+1.928533402 container attach 2b211d6a70b5cead536e9ec9cf62aa134bc225d74ec2e640376a0f844b76d278 (image=quay.io/ceph/ceph:v18, name=vigorous_ritchie, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:17:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:26 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 2a90c3e7-96d8-42a4-8b91-497db68b192f does not exist
Nov 29 01:17:26 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 7d20de59-577a-4895-871d-4672919657d5 does not exist
Nov 29 01:17:26 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev b51b2c4f-1bf1-49c1-982c-e7a6536a294c does not exist
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:26 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 29 01:17:26 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:17:26 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 01:17:26 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Nov 29 01:17:26 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1722203810' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 29 01:17:26 np0005539508 podman[82534]: 2025-11-29 06:17:26.873402824 +0000 UTC m=+0.046787259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:17:26 np0005539508 podman[82534]: 2025-11-29 06:17:26.991727549 +0000 UTC m=+0.165111904 container create 0ddf0084db5c8cb47d2db0e7d7cd21884384bb3306f4336251c534d0f9f0a2ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:17:27 np0005539508 systemd[1]: Started libpod-conmon-0ddf0084db5c8cb47d2db0e7d7cd21884384bb3306f4336251c534d0f9f0a2ce.scope.
Nov 29 01:17:27 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:17:27 np0005539508 podman[82534]: 2025-11-29 06:17:27.157404549 +0000 UTC m=+0.330788974 container init 0ddf0084db5c8cb47d2db0e7d7cd21884384bb3306f4336251c534d0f9f0a2ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_colden, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 01:17:27 np0005539508 podman[82534]: 2025-11-29 06:17:27.167803737 +0000 UTC m=+0.341188062 container start 0ddf0084db5c8cb47d2db0e7d7cd21884384bb3306f4336251c534d0f9f0a2ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_colden, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:17:27 np0005539508 podman[82534]: 2025-11-29 06:17:27.171288366 +0000 UTC m=+0.344672721 container attach 0ddf0084db5c8cb47d2db0e7d7cd21884384bb3306f4336251c534d0f9f0a2ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 01:17:27 np0005539508 peaceful_colden[82550]: 167 167
Nov 29 01:17:27 np0005539508 systemd[1]: libpod-0ddf0084db5c8cb47d2db0e7d7cd21884384bb3306f4336251c534d0f9f0a2ce.scope: Deactivated successfully.
Nov 29 01:17:27 np0005539508 conmon[82550]: conmon 0ddf0084db5c8cb47d2d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ddf0084db5c8cb47d2db0e7d7cd21884384bb3306f4336251c534d0f9f0a2ce.scope/container/memory.events
Nov 29 01:17:27 np0005539508 podman[82534]: 2025-11-29 06:17:27.17667642 +0000 UTC m=+0.350060745 container died 0ddf0084db5c8cb47d2db0e7d7cd21884384bb3306f4336251c534d0f9f0a2ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/1722203810' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 29 01:17:27 np0005539508 systemd[1]: var-lib-containers-storage-overlay-64c37785d6cba884bc91263eafba1a2b1d284a11aed8fa68b9cd17bd90f32406-merged.mount: Deactivated successfully.
Nov 29 01:17:27 np0005539508 podman[82534]: 2025-11-29 06:17:27.322945465 +0000 UTC m=+0.496329830 container remove 0ddf0084db5c8cb47d2db0e7d7cd21884384bb3306f4336251c534d0f9f0a2ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 01:17:27 np0005539508 systemd[1]: libpod-conmon-0ddf0084db5c8cb47d2db0e7d7cd21884384bb3306f4336251c534d0f9f0a2ce.scope: Deactivated successfully.
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1722203810' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 29 01:17:27 np0005539508 vigorous_ritchie[82328]: set require_min_compat_client to mimic
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 29 01:17:27 np0005539508 systemd[1]: libpod-2b211d6a70b5cead536e9ec9cf62aa134bc225d74ec2e640376a0f844b76d278.scope: Deactivated successfully.
Nov 29 01:17:27 np0005539508 podman[82274]: 2025-11-29 06:17:27.49825115 +0000 UTC m=+3.317695553 container died 2b211d6a70b5cead536e9ec9cf62aa134bc225d74ec2e640376a0f844b76d278 (image=quay.io/ceph/ceph:v18, name=vigorous_ritchie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:27 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.vxabpq (unknown last config time)...
Nov 29 01:17:27 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.vxabpq (unknown last config time)...
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.vxabpq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.vxabpq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:17:27 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:17:27 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.vxabpq on compute-0
Nov 29 01:17:27 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.vxabpq on compute-0
Nov 29 01:17:27 np0005539508 systemd[1]: var-lib-containers-storage-overlay-cc04187c7913311045295d4fa09d51b4417e7afc62b75932475e4fa15fb355b9-merged.mount: Deactivated successfully.
Nov 29 01:17:27 np0005539508 podman[82274]: 2025-11-29 06:17:27.676947262 +0000 UTC m=+3.496391665 container remove 2b211d6a70b5cead536e9ec9cf62aa134bc225d74ec2e640376a0f844b76d278 (image=quay.io/ceph/ceph:v18, name=vigorous_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 01:17:27 np0005539508 systemd[1]: libpod-conmon-2b211d6a70b5cead536e9ec9cf62aa134bc225d74ec2e640376a0f844b76d278.scope: Deactivated successfully.
Nov 29 01:17:28 np0005539508 podman[82700]: 2025-11-29 06:17:28.070223103 +0000 UTC m=+0.028278640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:17:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:28 np0005539508 podman[82700]: 2025-11-29 06:17:28.283080162 +0000 UTC m=+0.241135689 container create 0f8ad11c85257f7f49a56bf7aa5375307fb590632731ff0d5fb253eccbab8351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:17:28 np0005539508 python3[82739]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:17:28 np0005539508 podman[82740]: 2025-11-29 06:17:28.478197923 +0000 UTC m=+0.071838346 container create 843ce1b7fa03d5a5531d19fd0974b6b93e4eaf366c0d477d18c0e8566b0749a0 (image=quay.io/ceph/ceph:v18, name=brave_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:17:28 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/1722203810' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 29 01:17:28 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:28 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:28 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.vxabpq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 01:17:28 np0005539508 systemd[1]: Started libpod-conmon-0f8ad11c85257f7f49a56bf7aa5375307fb590632731ff0d5fb253eccbab8351.scope.
Nov 29 01:17:28 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:17:28 np0005539508 systemd[1]: Started libpod-conmon-843ce1b7fa03d5a5531d19fd0974b6b93e4eaf366c0d477d18c0e8566b0749a0.scope.
Nov 29 01:17:28 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:17:28 np0005539508 podman[82700]: 2025-11-29 06:17:28.516817148 +0000 UTC m=+0.474872645 container init 0f8ad11c85257f7f49a56bf7aa5375307fb590632731ff0d5fb253eccbab8351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_boyd, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 01:17:28 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8afff819ccf962ab91c47989d0b25c6726c02b43b0cfa1950c3d4822a2ed720b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:28 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8afff819ccf962ab91c47989d0b25c6726c02b43b0cfa1950c3d4822a2ed720b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:28 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8afff819ccf962ab91c47989d0b25c6726c02b43b0cfa1950c3d4822a2ed720b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:28 np0005539508 podman[82700]: 2025-11-29 06:17:28.527383791 +0000 UTC m=+0.485439278 container start 0f8ad11c85257f7f49a56bf7aa5375307fb590632731ff0d5fb253eccbab8351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_boyd, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 01:17:28 np0005539508 podman[82700]: 2025-11-29 06:17:28.531165759 +0000 UTC m=+0.489221246 container attach 0f8ad11c85257f7f49a56bf7aa5375307fb590632731ff0d5fb253eccbab8351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_boyd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:17:28 np0005539508 gifted_boyd[82755]: 167 167
Nov 29 01:17:28 np0005539508 systemd[1]: libpod-0f8ad11c85257f7f49a56bf7aa5375307fb590632731ff0d5fb253eccbab8351.scope: Deactivated successfully.
Nov 29 01:17:28 np0005539508 podman[82740]: 2025-11-29 06:17:28.537595463 +0000 UTC m=+0.131235876 container init 843ce1b7fa03d5a5531d19fd0974b6b93e4eaf366c0d477d18c0e8566b0749a0 (image=quay.io/ceph/ceph:v18, name=brave_johnson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:17:28 np0005539508 podman[82700]: 2025-11-29 06:17:28.538239341 +0000 UTC m=+0.496294838 container died 0f8ad11c85257f7f49a56bf7aa5375307fb590632731ff0d5fb253eccbab8351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_boyd, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:17:28 np0005539508 podman[82740]: 2025-11-29 06:17:28.544553582 +0000 UTC m=+0.138193975 container start 843ce1b7fa03d5a5531d19fd0974b6b93e4eaf366c0d477d18c0e8566b0749a0 (image=quay.io/ceph/ceph:v18, name=brave_johnson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:17:28 np0005539508 podman[82740]: 2025-11-29 06:17:28.548536136 +0000 UTC m=+0.142176539 container attach 843ce1b7fa03d5a5531d19fd0974b6b93e4eaf366c0d477d18c0e8566b0749a0 (image=quay.io/ceph/ceph:v18, name=brave_johnson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:17:28 np0005539508 podman[82740]: 2025-11-29 06:17:28.459527699 +0000 UTC m=+0.053168132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:17:28 np0005539508 systemd[1]: var-lib-containers-storage-overlay-726a5ca468b5929d0b9b3132c401666fd51895821a89d3b1046dfb810e9cb90b-merged.mount: Deactivated successfully.
Nov 29 01:17:28 np0005539508 podman[82700]: 2025-11-29 06:17:28.585263326 +0000 UTC m=+0.543318823 container remove 0f8ad11c85257f7f49a56bf7aa5375307fb590632731ff0d5fb253eccbab8351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_boyd, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:17:28 np0005539508 systemd[1]: libpod-conmon-0f8ad11c85257f7f49a56bf7aa5375307fb590632731ff0d5fb253eccbab8351.scope: Deactivated successfully.
Nov 29 01:17:28 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:17:28 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:28 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:17:28 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:28 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:17:28 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:17:28 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:17:28 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:17:28 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:17:28 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:28 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev de7708f6-109f-4bda-9c2f-6a3e09336563 does not exist
Nov 29 01:17:28 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 702a2450-fbe1-48bb-9dff-cf9969313aac does not exist
Nov 29 01:17:28 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 3e257dda-5644-4265-b1e3-a3f205625295 does not exist
Nov 29 01:17:29 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:17:29 np0005539508 ceph-mon[74654]: Reconfiguring mgr.compute-0.vxabpq (unknown last config time)...
Nov 29 01:17:29 np0005539508 ceph-mon[74654]: Reconfiguring daemon mgr.compute-0.vxabpq on compute-0
Nov 29 01:17:29 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:29 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:29 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:17:29 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 01:17:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 01:17:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 01:17:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 01:17:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:29 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Added host compute-0
Nov 29 01:17:29 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 29 01:17:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:17:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:17:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:17:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:17:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:17:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:29 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 5e86a151-7046-48f1-af39-40904819a436 does not exist
Nov 29 01:17:29 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev d4f47224-617f-4fcc-b3e6-e8c67fd29605 does not exist
Nov 29 01:17:29 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 48f46211-437f-4c2a-a39a-e7a00041c86b does not exist
Nov 29 01:17:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:30 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:30 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:30 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:30 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:30 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:17:30 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:31 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Nov 29 01:17:31 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Nov 29 01:17:31 np0005539508 ceph-mon[74654]: Added host compute-0
Nov 29 01:17:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:17:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:32 np0005539508 ceph-mon[74654]: Deploying cephadm binary to compute-1
Nov 29 01:17:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:35 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 01:17:35 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:35 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Added host compute-1
Nov 29 01:17:35 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Added host compute-1
Nov 29 01:17:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:17:36 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:17:36 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:36 np0005539508 ceph-mon[74654]: Added host compute-1
Nov 29 01:17:36 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:36 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:36 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Nov 29 01:17:36 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Nov 29 01:17:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:17:37 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:37 np0005539508 ceph-mon[74654]: Deploying cephadm binary to compute-2
Nov 29 01:17:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:17:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:39 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:40 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 01:17:40 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:40 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Added host compute-2
Nov 29 01:17:40 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Added host compute-2
Nov 29 01:17:40 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Nov 29 01:17:40 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Nov 29 01:17:40 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 01:17:40 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:40 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Nov 29 01:17:40 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Nov 29 01:17:40 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 01:17:40 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:40 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 29 01:17:40 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 29 01:17:40 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Nov 29 01:17:40 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Nov 29 01:17:40 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Nov 29 01:17:40 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Nov 29 01:17:40 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Nov 29 01:17:41 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:41 np0005539508 brave_johnson[82760]: Added host 'compute-0' with addr '192.168.122.100'
Nov 29 01:17:41 np0005539508 brave_johnson[82760]: Added host 'compute-1' with addr '192.168.122.101'
Nov 29 01:17:41 np0005539508 brave_johnson[82760]: Added host 'compute-2' with addr '192.168.122.102'
Nov 29 01:17:41 np0005539508 brave_johnson[82760]: Scheduled mon update...
Nov 29 01:17:41 np0005539508 brave_johnson[82760]: Scheduled mgr update...
Nov 29 01:17:41 np0005539508 brave_johnson[82760]: Scheduled osd.default_drive_group update...
Nov 29 01:17:41 np0005539508 systemd[1]: libpod-843ce1b7fa03d5a5531d19fd0974b6b93e4eaf366c0d477d18c0e8566b0749a0.scope: Deactivated successfully.
Nov 29 01:17:41 np0005539508 podman[82740]: 2025-11-29 06:17:41.060564298 +0000 UTC m=+12.654204711 container died 843ce1b7fa03d5a5531d19fd0974b6b93e4eaf366c0d477d18c0e8566b0749a0 (image=quay.io/ceph/ceph:v18, name=brave_johnson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Nov 29 01:17:41 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:41 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:41 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:41 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:41 np0005539508 systemd[1]: var-lib-containers-storage-overlay-8afff819ccf962ab91c47989d0b25c6726c02b43b0cfa1950c3d4822a2ed720b-merged.mount: Deactivated successfully.
Nov 29 01:17:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:17:42 np0005539508 podman[82740]: 2025-11-29 06:17:42.088201766 +0000 UTC m=+13.681842209 container remove 843ce1b7fa03d5a5531d19fd0974b6b93e4eaf366c0d477d18c0e8566b0749a0 (image=quay.io/ceph/ceph:v18, name=brave_johnson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:17:42 np0005539508 systemd[1]: libpod-conmon-843ce1b7fa03d5a5531d19fd0974b6b93e4eaf366c0d477d18c0e8566b0749a0.scope: Deactivated successfully.
Nov 29 01:17:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:42 np0005539508 ceph-mon[74654]: Added host compute-2
Nov 29 01:17:42 np0005539508 ceph-mon[74654]: Saving service mon spec with placement compute-0;compute-1;compute-2
Nov 29 01:17:42 np0005539508 ceph-mon[74654]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Nov 29 01:17:42 np0005539508 ceph-mon[74654]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 29 01:17:42 np0005539508 ceph-mon[74654]: Marking host: compute-1 for OSDSpec preview refresh.
Nov 29 01:17:42 np0005539508 ceph-mon[74654]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Nov 29 01:17:42 np0005539508 python3[83057]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:17:42 np0005539508 podman[83059]: 2025-11-29 06:17:42.635164802 +0000 UTC m=+0.043825995 container create 112c5cb33929500c687ed117716500100a91458f4acd34851bc53aaed076c5a6 (image=quay.io/ceph/ceph:v18, name=vigilant_mendeleev, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 29 01:17:42 np0005539508 systemd[1]: Started libpod-conmon-112c5cb33929500c687ed117716500100a91458f4acd34851bc53aaed076c5a6.scope.
Nov 29 01:17:42 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:17:42 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93dc71a3d60360a6d61cb5ab2d2efbab6b6e11340cf3fd827d8fec18ff7483f3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:42 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93dc71a3d60360a6d61cb5ab2d2efbab6b6e11340cf3fd827d8fec18ff7483f3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:42 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93dc71a3d60360a6d61cb5ab2d2efbab6b6e11340cf3fd827d8fec18ff7483f3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:17:42 np0005539508 podman[83059]: 2025-11-29 06:17:42.616844078 +0000 UTC m=+0.025505361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:17:42 np0005539508 podman[83059]: 2025-11-29 06:17:42.721729208 +0000 UTC m=+0.130390431 container init 112c5cb33929500c687ed117716500100a91458f4acd34851bc53aaed076c5a6 (image=quay.io/ceph/ceph:v18, name=vigilant_mendeleev, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 01:17:42 np0005539508 podman[83059]: 2025-11-29 06:17:42.73228119 +0000 UTC m=+0.140942383 container start 112c5cb33929500c687ed117716500100a91458f4acd34851bc53aaed076c5a6 (image=quay.io/ceph/ceph:v18, name=vigilant_mendeleev, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 01:17:42 np0005539508 podman[83059]: 2025-11-29 06:17:42.735790831 +0000 UTC m=+0.144452024 container attach 112c5cb33929500c687ed117716500100a91458f4acd34851bc53aaed076c5a6 (image=quay.io/ceph/ceph:v18, name=vigilant_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 01:17:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 01:17:43 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/178888563' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 01:17:43 np0005539508 vigilant_mendeleev[83075]: 
Nov 29 01:17:43 np0005539508 vigilant_mendeleev[83075]: {"fsid":"336ec58c-893b-528f-a0c1-6ed1196bc047","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":96,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-29T06:16:03.952029+0000","services":{}},"progress_events":{}}
Nov 29 01:17:43 np0005539508 systemd[1]: libpod-112c5cb33929500c687ed117716500100a91458f4acd34851bc53aaed076c5a6.scope: Deactivated successfully.
Nov 29 01:17:43 np0005539508 podman[83059]: 2025-11-29 06:17:43.363921921 +0000 UTC m=+0.772583124 container died 112c5cb33929500c687ed117716500100a91458f4acd34851bc53aaed076c5a6 (image=quay.io/ceph/ceph:v18, name=vigilant_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 01:17:43 np0005539508 systemd[1]: var-lib-containers-storage-overlay-93dc71a3d60360a6d61cb5ab2d2efbab6b6e11340cf3fd827d8fec18ff7483f3-merged.mount: Deactivated successfully.
Nov 29 01:17:43 np0005539508 podman[83059]: 2025-11-29 06:17:43.423435174 +0000 UTC m=+0.832096367 container remove 112c5cb33929500c687ed117716500100a91458f4acd34851bc53aaed076c5a6 (image=quay.io/ceph/ceph:v18, name=vigilant_mendeleev, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 01:17:43 np0005539508 systemd[1]: libpod-conmon-112c5cb33929500c687ed117716500100a91458f4acd34851bc53aaed076c5a6.scope: Deactivated successfully.
Nov 29 01:17:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:17:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 01:17:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 01:17:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:17:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:17:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:17:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:17:47 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 29 01:17:47 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 29 01:17:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:17:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 01:17:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:17:48 np0005539508 ceph-mon[74654]: Updating compute-1:/etc/ceph/ceph.conf
Nov 29 01:17:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:48 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 01:17:48 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 01:17:49 np0005539508 ceph-mon[74654]: Updating compute-1:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 01:17:49 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 29 01:17:49 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 29 01:17:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:50 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.client.admin.keyring
Nov 29 01:17:50 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.client.admin.keyring
Nov 29 01:17:51 np0005539508 ceph-mon[74654]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 29 01:17:51 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:17:51 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:51 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:51 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:17:51 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:51 np0005539508 ceph-mgr[74948]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 29 01:17:51 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 29 01:17:51 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:51 np0005539508 ceph-mgr[74948]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 29 01:17:51 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 29 01:17:51 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:51 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:17:51.919+0000 7f90e34d8640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Nov 29 01:17:51 np0005539508 ceph-mgr[74948]: [progress INFO root] update: starting ev f16fc35c-a5e4-431b-90d1-3bb309788cfc (Updating crash deployment (+1 -> 2))
Nov 29 01:17:51 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: service_name: mon
Nov 29 01:17:51 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: placement:
Nov 29 01:17:51 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]:  hosts:
Nov 29 01:17:51 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]:  - compute-0
Nov 29 01:17:51 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]:  - compute-1
Nov 29 01:17:51 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]:  - compute-2
Nov 29 01:17:51 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 29 01:17:51 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:17:51.920+0000 7f90e34d8640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Nov 29 01:17:51 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: service_name: mgr
Nov 29 01:17:51 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: placement:
Nov 29 01:17:51 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]:  hosts:
Nov 29 01:17:51 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]:  - compute-0
Nov 29 01:17:51 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]:  - compute-1
Nov 29 01:17:51 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]:  - compute-2
Nov 29 01:17:51 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 29 01:17:51 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 29 01:17:51 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 01:17:51 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 01:17:51 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:17:51 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:17:51 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Nov 29 01:17:51 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Nov 29 01:17:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:17:52 np0005539508 ceph-mon[74654]: Updating compute-1:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.client.admin.keyring
Nov 29 01:17:52 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:52 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:17:52 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 01:17:52 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 01:17:52 np0005539508 ceph-mon[74654]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Nov 29 01:17:52 np0005539508 ceph-mon[74654]: log_channel(cluster) log [WRN] : Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)
Nov 29 01:17:53 np0005539508 ceph-mon[74654]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 29 01:17:53 np0005539508 ceph-mon[74654]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 29 01:17:53 np0005539508 ceph-mon[74654]: Deploying daemon crash.compute-1 on compute-1
Nov 29 01:17:53 np0005539508 ceph-mon[74654]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Nov 29 01:17:53 np0005539508 ceph-mon[74654]: Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)
Nov 29 01:17:53 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:17:54
Nov 29 01:17:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:17:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:17:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] No pools available
Nov 29 01:17:54 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:17:54 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:17:54 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:17:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:17:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:17:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:17:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:17:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:17:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:17:55 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:17:57 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:17:59 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:18:01 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:18:02 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:18:03 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:18:05 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:18:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:18:07 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:18:09 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:18:11 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:18:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:18:13 np0005539508 python3[83145]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:18:13 np0005539508 podman[83147]: 2025-11-29 06:18:13.807445403 +0000 UTC m=+0.030228245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:18:13 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:18:13 np0005539508 podman[83147]: 2025-11-29 06:18:13.945802381 +0000 UTC m=+0.168585233 container create 0817939557ffdcf6ceb36b59eeda86567114c485f3a0318a389fed57f0deb920 (image=quay.io/ceph/ceph:v18, name=brave_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:18:14 np0005539508 systemd[1]: Started libpod-conmon-0817939557ffdcf6ceb36b59eeda86567114c485f3a0318a389fed57f0deb920.scope.
Nov 29 01:18:14 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:18:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c067d85274d6afe5e5c58727161c96113ab0c4c627ebc38d2b99589bb51e1d6c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c067d85274d6afe5e5c58727161c96113ab0c4c627ebc38d2b99589bb51e1d6c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c067d85274d6afe5e5c58727161c96113ab0c4c627ebc38d2b99589bb51e1d6c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:14 np0005539508 podman[83147]: 2025-11-29 06:18:14.130507985 +0000 UTC m=+0.353290867 container init 0817939557ffdcf6ceb36b59eeda86567114c485f3a0318a389fed57f0deb920 (image=quay.io/ceph/ceph:v18, name=brave_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:18:14 np0005539508 podman[83147]: 2025-11-29 06:18:14.141986994 +0000 UTC m=+0.364769846 container start 0817939557ffdcf6ceb36b59eeda86567114c485f3a0318a389fed57f0deb920 (image=quay.io/ceph/ceph:v18, name=brave_franklin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:18:14 np0005539508 podman[83147]: 2025-11-29 06:18:14.186774265 +0000 UTC m=+0.409557167 container attach 0817939557ffdcf6ceb36b59eeda86567114c485f3a0318a389fed57f0deb920 (image=quay.io/ceph/ceph:v18, name=brave_franklin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 01:18:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 01:18:14 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2430457078' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 01:18:14 np0005539508 brave_franklin[83163]: 
Nov 29 01:18:14 np0005539508 brave_franklin[83163]: {"fsid":"336ec58c-893b-528f-a0c1-6ed1196bc047","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false},"CEPHADM_REFRESH_FAILED":{"severity":"HEALTH_WARN","summary":{"message":"failed to probe daemons or devices","count":1},"muted":false},"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":127,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-29T06:17:55.922038+0000","services":{}},"progress_events":{"f16fc35c-a5e4-431b-90d1-3bb309788cfc":{"message":"Updating crash deployment (+1 -> 2) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Nov 29 01:18:14 np0005539508 systemd[1]: libpod-0817939557ffdcf6ceb36b59eeda86567114c485f3a0318a389fed57f0deb920.scope: Deactivated successfully.
Nov 29 01:18:14 np0005539508 podman[83147]: 2025-11-29 06:18:14.772950293 +0000 UTC m=+0.995733155 container died 0817939557ffdcf6ceb36b59eeda86567114c485f3a0318a389fed57f0deb920 (image=quay.io/ceph/ceph:v18, name=brave_franklin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 01:18:15 np0005539508 systemd[1]: var-lib-containers-storage-overlay-c067d85274d6afe5e5c58727161c96113ab0c4c627ebc38d2b99589bb51e1d6c-merged.mount: Deactivated successfully.
Nov 29 01:18:15 np0005539508 podman[83147]: 2025-11-29 06:18:15.131753628 +0000 UTC m=+1.354536440 container remove 0817939557ffdcf6ceb36b59eeda86567114c485f3a0318a389fed57f0deb920 (image=quay.io/ceph/ceph:v18, name=brave_franklin, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:18:15 np0005539508 systemd[1]: libpod-conmon-0817939557ffdcf6ceb36b59eeda86567114c485f3a0318a389fed57f0deb920.scope: Deactivated successfully.
Nov 29 01:18:15 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:18:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:18:17 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:18:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:18:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 01:18:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:18 np0005539508 ceph-mgr[74948]: [progress INFO root] complete: finished ev f16fc35c-a5e4-431b-90d1-3bb309788cfc (Updating crash deployment (+1 -> 2))
Nov 29 01:18:18 np0005539508 ceph-mgr[74948]: [progress INFO root] Completed event f16fc35c-a5e4-431b-90d1-3bb309788cfc (Updating crash deployment (+1 -> 2)) in 27 seconds
Nov 29 01:18:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 01:18:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:18:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:18:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:18:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:18:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:18:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:18:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:18:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:18:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:18:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:18:19 np0005539508 podman[83345]: 2025-11-29 06:18:19.225661456 +0000 UTC m=+0.046521742 container create 1a984c0485914b571ea9ac20c7f56b84c07c1b4eabd1c0d103d47e2c65f8c07a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 29 01:18:19 np0005539508 systemd[1]: Started libpod-conmon-1a984c0485914b571ea9ac20c7f56b84c07c1b4eabd1c0d103d47e2c65f8c07a.scope.
Nov 29 01:18:19 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:18:19 np0005539508 podman[83345]: 2025-11-29 06:18:19.302337149 +0000 UTC m=+0.123197485 container init 1a984c0485914b571ea9ac20c7f56b84c07c1b4eabd1c0d103d47e2c65f8c07a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:18:19 np0005539508 podman[83345]: 2025-11-29 06:18:19.207614499 +0000 UTC m=+0.028474795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:18:19 np0005539508 podman[83345]: 2025-11-29 06:18:19.308405513 +0000 UTC m=+0.129265789 container start 1a984c0485914b571ea9ac20c7f56b84c07c1b4eabd1c0d103d47e2c65f8c07a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:18:19 np0005539508 nostalgic_franklin[83361]: 167 167
Nov 29 01:18:19 np0005539508 podman[83345]: 2025-11-29 06:18:19.313749886 +0000 UTC m=+0.134610192 container attach 1a984c0485914b571ea9ac20c7f56b84c07c1b4eabd1c0d103d47e2c65f8c07a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 01:18:19 np0005539508 systemd[1]: libpod-1a984c0485914b571ea9ac20c7f56b84c07c1b4eabd1c0d103d47e2c65f8c07a.scope: Deactivated successfully.
Nov 29 01:18:19 np0005539508 conmon[83361]: conmon 1a984c0485914b571ea9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1a984c0485914b571ea9ac20c7f56b84c07c1b4eabd1c0d103d47e2c65f8c07a.scope/container/memory.events
Nov 29 01:18:19 np0005539508 podman[83345]: 2025-11-29 06:18:19.315822465 +0000 UTC m=+0.136682771 container died 1a984c0485914b571ea9ac20c7f56b84c07c1b4eabd1c0d103d47e2c65f8c07a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 01:18:19 np0005539508 systemd[1]: var-lib-containers-storage-overlay-258edc8726749ce9b7d7f47169eceb443e9c39dfdeaf3300311f8b586fd373a8-merged.mount: Deactivated successfully.
Nov 29 01:18:19 np0005539508 podman[83345]: 2025-11-29 06:18:19.363232001 +0000 UTC m=+0.184092278 container remove 1a984c0485914b571ea9ac20c7f56b84c07c1b4eabd1c0d103d47e2c65f8c07a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:18:19 np0005539508 systemd[1]: libpod-conmon-1a984c0485914b571ea9ac20c7f56b84c07c1b4eabd1c0d103d47e2c65f8c07a.scope: Deactivated successfully.
Nov 29 01:18:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:18:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:18:19 np0005539508 podman[83384]: 2025-11-29 06:18:19.576837272 +0000 UTC m=+0.058216026 container create e9fbb3ae787c537f03ff324a7045322127b7e0f1019400a3b6a5f20adfbe357e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:18:19 np0005539508 ceph-mgr[74948]: [progress INFO root] Writing back 2 completed events
Nov 29 01:18:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 01:18:19 np0005539508 systemd[1]: Started libpod-conmon-e9fbb3ae787c537f03ff324a7045322127b7e0f1019400a3b6a5f20adfbe357e.scope.
Nov 29 01:18:19 np0005539508 podman[83384]: 2025-11-29 06:18:19.554277277 +0000 UTC m=+0.035656021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:18:19 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:19 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:18:19 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c24a33548acac4d6657140a88e91a0edea3a133219e0cb170bd19604ea3b72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:19 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c24a33548acac4d6657140a88e91a0edea3a133219e0cb170bd19604ea3b72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:19 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c24a33548acac4d6657140a88e91a0edea3a133219e0cb170bd19604ea3b72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:19 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c24a33548acac4d6657140a88e91a0edea3a133219e0cb170bd19604ea3b72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:19 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c24a33548acac4d6657140a88e91a0edea3a133219e0cb170bd19604ea3b72/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:19 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:18:19 np0005539508 podman[83384]: 2025-11-29 06:18:19.941028171 +0000 UTC m=+0.422406935 container init e9fbb3ae787c537f03ff324a7045322127b7e0f1019400a3b6a5f20adfbe357e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 01:18:19 np0005539508 podman[83384]: 2025-11-29 06:18:19.958206583 +0000 UTC m=+0.439585297 container start e9fbb3ae787c537f03ff324a7045322127b7e0f1019400a3b6a5f20adfbe357e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 01:18:19 np0005539508 podman[83384]: 2025-11-29 06:18:19.96195738 +0000 UTC m=+0.443336154 container attach e9fbb3ae787c537f03ff324a7045322127b7e0f1019400a3b6a5f20adfbe357e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 01:18:20 np0005539508 bold_agnesi[83401]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:18:20 np0005539508 bold_agnesi[83401]: --> relative data size: 1.0
Nov 29 01:18:20 np0005539508 bold_agnesi[83401]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 01:18:20 np0005539508 bold_agnesi[83401]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 91f280f1-e534-4adc-bf70-98711580c2dd
Nov 29 01:18:20 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "f793b967-de22-4105-bb0d-c91464bf150f"} v 0) v1
Nov 29 01:18:20 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/321313974' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f793b967-de22-4105-bb0d-c91464bf150f"}]: dispatch
Nov 29 01:18:20 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 29 01:18:20 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 01:18:20 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/321313974' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f793b967-de22-4105-bb0d-c91464bf150f"}]': finished
Nov 29 01:18:20 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 29 01:18:20 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 29 01:18:20 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 01:18:20 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 01:18:20 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 01:18:20 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:20 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.101:0/321313974' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f793b967-de22-4105-bb0d-c91464bf150f"}]: dispatch
Nov 29 01:18:20 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.101:0/321313974' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f793b967-de22-4105-bb0d-c91464bf150f"}]': finished
Nov 29 01:18:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "91f280f1-e534-4adc-bf70-98711580c2dd"} v 0) v1
Nov 29 01:18:21 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3026959268' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "91f280f1-e534-4adc-bf70-98711580c2dd"}]: dispatch
Nov 29 01:18:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 29 01:18:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 01:18:21 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3026959268' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "91f280f1-e534-4adc-bf70-98711580c2dd"}]': finished
Nov 29 01:18:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 29 01:18:21 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 29 01:18:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 01:18:21 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 01:18:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 01:18:21 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 01:18:21 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 01:18:21 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 01:18:21 np0005539508 bold_agnesi[83401]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 01:18:21 np0005539508 lvm[83448]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 01:18:21 np0005539508 lvm[83448]: VG ceph_vg0 finished
Nov 29 01:18:21 np0005539508 bold_agnesi[83401]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 29 01:18:21 np0005539508 bold_agnesi[83401]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 29 01:18:21 np0005539508 bold_agnesi[83401]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 01:18:21 np0005539508 bold_agnesi[83401]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 29 01:18:21 np0005539508 bold_agnesi[83401]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 29 01:18:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 29 01:18:21 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4241004139' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 01:18:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 29 01:18:21 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4020978526' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 01:18:21 np0005539508 bold_agnesi[83401]: stderr: got monmap epoch 1
Nov 29 01:18:21 np0005539508 bold_agnesi[83401]: --> Creating keyring file for osd.1
Nov 29 01:18:21 np0005539508 bold_agnesi[83401]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 29 01:18:21 np0005539508 bold_agnesi[83401]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 29 01:18:21 np0005539508 bold_agnesi[83401]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 91f280f1-e534-4adc-bf70-98711580c2dd --setuser ceph --setgroup ceph
Nov 29 01:18:21 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:18:21 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/3026959268' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "91f280f1-e534-4adc-bf70-98711580c2dd"}]: dispatch
Nov 29 01:18:21 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/3026959268' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "91f280f1-e534-4adc-bf70-98711580c2dd"}]': finished
Nov 29 01:18:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:18:22 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 29 01:18:22 np0005539508 ceph-mon[74654]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 29 01:18:23 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:18:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:18:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:18:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:18:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:18:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:18:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:18:24 np0005539508 bold_agnesi[83401]: stderr: 2025-11-29T06:18:21.872+0000 7f19b5968740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 01:18:24 np0005539508 bold_agnesi[83401]: stderr: 2025-11-29T06:18:21.872+0000 7f19b5968740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 01:18:24 np0005539508 bold_agnesi[83401]: stderr: 2025-11-29T06:18:21.872+0000 7f19b5968740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 01:18:24 np0005539508 bold_agnesi[83401]: stderr: 2025-11-29T06:18:21.872+0000 7f19b5968740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 29 01:18:24 np0005539508 bold_agnesi[83401]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 29 01:18:24 np0005539508 bold_agnesi[83401]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 01:18:24 np0005539508 bold_agnesi[83401]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 29 01:18:24 np0005539508 bold_agnesi[83401]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 29 01:18:24 np0005539508 bold_agnesi[83401]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 29 01:18:24 np0005539508 bold_agnesi[83401]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 01:18:24 np0005539508 bold_agnesi[83401]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 01:18:24 np0005539508 bold_agnesi[83401]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 29 01:18:24 np0005539508 bold_agnesi[83401]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
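The activate sequence above repoints the OSD's `block` symlink with `ln -snf` rather than plain `ln -s`. A minimal stand-alone sketch of that idiom (paths here are stand-ins; in the log the real target is `/dev/ceph_vg0/ceph_lv0` and the link is `/var/lib/ceph/osd/ceph-1/block`):

```shell
# Demonstrate the `ln -snf` idiom ceph-volume uses when re-activating:
# -f replaces an existing link in place, -n treats an existing symlink
# as a file instead of descending into it as a directory.
workdir=$(mktemp -d)
touch "$workdir/lv_old" "$workdir/lv_new"
ln -s  "$workdir/lv_old" "$workdir/block"    # initial link (prepare step)
ln -snf "$workdir/lv_new" "$workdir/block"   # repoint without removing first
result=$(readlink "$workdir/block")
rm -rf "$workdir"
echo "$result"                               # prints .../lv_new
```

Without `-n`, a second `ln -sf` against a symlink that happens to resolve to a directory would create the new link *inside* the target instead of replacing the link itself, which is why ceph-volume always passes both flags.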
Nov 29 01:18:24 np0005539508 systemd[1]: libpod-e9fbb3ae787c537f03ff324a7045322127b7e0f1019400a3b6a5f20adfbe357e.scope: Deactivated successfully.
Nov 29 01:18:24 np0005539508 podman[83384]: 2025-11-29 06:18:24.509352573 +0000 UTC m=+4.990731297 container died e9fbb3ae787c537f03ff324a7045322127b7e0f1019400a3b6a5f20adfbe357e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_agnesi, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:18:24 np0005539508 systemd[1]: libpod-e9fbb3ae787c537f03ff324a7045322127b7e0f1019400a3b6a5f20adfbe357e.scope: Consumed 2.643s CPU time.
Nov 29 01:18:24 np0005539508 systemd[1]: var-lib-containers-storage-overlay-83c24a33548acac4d6657140a88e91a0edea3a133219e0cb170bd19604ea3b72-merged.mount: Deactivated successfully.
Nov 29 01:18:24 np0005539508 podman[83384]: 2025-11-29 06:18:24.571755558 +0000 UTC m=+5.053134282 container remove e9fbb3ae787c537f03ff324a7045322127b7e0f1019400a3b6a5f20adfbe357e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:18:24 np0005539508 systemd[1]: libpod-conmon-e9fbb3ae787c537f03ff324a7045322127b7e0f1019400a3b6a5f20adfbe357e.scope: Deactivated successfully.
Nov 29 01:18:25 np0005539508 podman[84510]: 2025-11-29 06:18:25.308804172 +0000 UTC m=+0.046512084 container create 9f80c9c7e5e4c73a87fd22b442caa9eee65c4575dbc0f7a21840dc6e2e7046cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sutherland, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:18:25 np0005539508 systemd[1]: Started libpod-conmon-9f80c9c7e5e4c73a87fd22b442caa9eee65c4575dbc0f7a21840dc6e2e7046cb.scope.
Nov 29 01:18:25 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:18:25 np0005539508 podman[84510]: 2025-11-29 06:18:25.28518993 +0000 UTC m=+0.022897882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:18:25 np0005539508 podman[84510]: 2025-11-29 06:18:25.398809322 +0000 UTC m=+0.136517274 container init 9f80c9c7e5e4c73a87fd22b442caa9eee65c4575dbc0f7a21840dc6e2e7046cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 01:18:25 np0005539508 podman[84510]: 2025-11-29 06:18:25.408076476 +0000 UTC m=+0.145784428 container start 9f80c9c7e5e4c73a87fd22b442caa9eee65c4575dbc0f7a21840dc6e2e7046cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:18:25 np0005539508 podman[84510]: 2025-11-29 06:18:25.412197803 +0000 UTC m=+0.149905755 container attach 9f80c9c7e5e4c73a87fd22b442caa9eee65c4575dbc0f7a21840dc6e2e7046cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:18:25 np0005539508 zen_sutherland[84526]: 167 167
Nov 29 01:18:25 np0005539508 systemd[1]: libpod-9f80c9c7e5e4c73a87fd22b442caa9eee65c4575dbc0f7a21840dc6e2e7046cb.scope: Deactivated successfully.
Nov 29 01:18:25 np0005539508 podman[84510]: 2025-11-29 06:18:25.416599338 +0000 UTC m=+0.154307290 container died 9f80c9c7e5e4c73a87fd22b442caa9eee65c4575dbc0f7a21840dc6e2e7046cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 01:18:25 np0005539508 systemd[1]: var-lib-containers-storage-overlay-3ad404d718baa356979e3155b41b328385fe80f6fadc23fd0c21d915b8252e81-merged.mount: Deactivated successfully.
Nov 29 01:18:25 np0005539508 podman[84510]: 2025-11-29 06:18:25.473888326 +0000 UTC m=+0.211596238 container remove 9f80c9c7e5e4c73a87fd22b442caa9eee65c4575dbc0f7a21840dc6e2e7046cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:18:25 np0005539508 systemd[1]: libpod-conmon-9f80c9c7e5e4c73a87fd22b442caa9eee65c4575dbc0f7a21840dc6e2e7046cb.scope: Deactivated successfully.
Nov 29 01:18:25 np0005539508 podman[84550]: 2025-11-29 06:18:25.661657967 +0000 UTC m=+0.042625853 container create bb63cde4609f1ef0b77e6d91a3df6e336e174aebca6ff0cccd6d714549a04248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 01:18:25 np0005539508 systemd[1]: Started libpod-conmon-bb63cde4609f1ef0b77e6d91a3df6e336e174aebca6ff0cccd6d714549a04248.scope.
Nov 29 01:18:25 np0005539508 podman[84550]: 2025-11-29 06:18:25.641909365 +0000 UTC m=+0.022877251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:18:25 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:18:25 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e607c02a3ea82ca9e855f220261cb971a055ffe64259d62ac96e47173d13b6f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:25 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e607c02a3ea82ca9e855f220261cb971a055ffe64259d62ac96e47173d13b6f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:25 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e607c02a3ea82ca9e855f220261cb971a055ffe64259d62ac96e47173d13b6f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:25 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e607c02a3ea82ca9e855f220261cb971a055ffe64259d62ac96e47173d13b6f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:25 np0005539508 podman[84550]: 2025-11-29 06:18:25.774183038 +0000 UTC m=+0.155150984 container init bb63cde4609f1ef0b77e6d91a3df6e336e174aebca6ff0cccd6d714549a04248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_turing, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:18:25 np0005539508 podman[84550]: 2025-11-29 06:18:25.78691396 +0000 UTC m=+0.167881846 container start bb63cde4609f1ef0b77e6d91a3df6e336e174aebca6ff0cccd6d714549a04248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:18:25 np0005539508 podman[84550]: 2025-11-29 06:18:25.791311425 +0000 UTC m=+0.172279321 container attach bb63cde4609f1ef0b77e6d91a3df6e336e174aebca6ff0cccd6d714549a04248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 29 01:18:25 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]: {
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:    "1": [
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:        {
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:            "devices": [
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:                "/dev/loop3"
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:            ],
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:            "lv_name": "ceph_lv0",
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:            "lv_size": "7511998464",
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:            "name": "ceph_lv0",
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:            "tags": {
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:                "ceph.cluster_name": "ceph",
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:                "ceph.crush_device_class": "",
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:                "ceph.encrypted": "0",
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:                "ceph.osd_id": "1",
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:                "ceph.type": "block",
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:                "ceph.vdo": "0"
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:            },
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:            "type": "block",
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:            "vg_name": "ceph_vg0"
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:        }
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]:    ]
Nov 29 01:18:26 np0005539508 relaxed_turing[84566]: }
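In the `ceph-volume lvm list` JSON above, `lv_tags` is a flat, comma-separated `key=value` string mirroring the structured `tags` object. A minimal sketch for parsing it, assuming (as holds for the tags ceph-volume sets here: paths, UUIDs, flags) that values contain no commas:

```python
def parse_lv_tags(lv_tags: str) -> dict:
    """Split a ceph-volume lv_tags string ("k=v,k=v,...") into a dict.

    Assumes tag values themselves contain no commas, which is true for
    the tags ceph-volume attaches to OSD logical volumes.
    """
    tags = {}
    for pair in lv_tags.split(","):
        key, _, value = pair.partition("=")  # keeps empty values intact
        tags[key] = value
    return tags

# A subset of the tags taken verbatim from the listing above.
tags = parse_lv_tags(
    "ceph.block_device=/dev/ceph_vg0/ceph_lv0,"
    "ceph.cephx_lockbox_secret=,"
    "ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,"
    "ceph.osd_id=1,ceph.type=block"
)
print(tags["ceph.osd_id"])  # → 1
```

`str.partition` rather than `str.split("=")` keeps empty values (like `ceph.cephx_lockbox_secret=`) as empty strings instead of raising on a missing field.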
Nov 29 01:18:26 np0005539508 systemd[1]: libpod-bb63cde4609f1ef0b77e6d91a3df6e336e174aebca6ff0cccd6d714549a04248.scope: Deactivated successfully.
Nov 29 01:18:26 np0005539508 podman[84550]: 2025-11-29 06:18:26.618589575 +0000 UTC m=+0.999557441 container died bb63cde4609f1ef0b77e6d91a3df6e336e174aebca6ff0cccd6d714549a04248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:18:26 np0005539508 systemd[1]: var-lib-containers-storage-overlay-e607c02a3ea82ca9e855f220261cb971a055ffe64259d62ac96e47173d13b6f9-merged.mount: Deactivated successfully.
Nov 29 01:18:26 np0005539508 podman[84550]: 2025-11-29 06:18:26.678061047 +0000 UTC m=+1.059028893 container remove bb63cde4609f1ef0b77e6d91a3df6e336e174aebca6ff0cccd6d714549a04248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_turing, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:18:26 np0005539508 systemd[1]: libpod-conmon-bb63cde4609f1ef0b77e6d91a3df6e336e174aebca6ff0cccd6d714549a04248.scope: Deactivated successfully.
Nov 29 01:18:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 29 01:18:26 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 01:18:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:18:26 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:18:26 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 29 01:18:26 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Nov 29 01:18:26 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 01:18:27 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:18:27 np0005539508 podman[84727]: 2025-11-29 06:18:27.446630947 +0000 UTC m=+0.072125212 container create 2b03d4033753b11e055402783a9f95f9b78ada79eff8daec7739e578874ec7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 01:18:27 np0005539508 systemd[1]: Started libpod-conmon-2b03d4033753b11e055402783a9f95f9b78ada79eff8daec7739e578874ec7fc.scope.
Nov 29 01:18:27 np0005539508 podman[84727]: 2025-11-29 06:18:27.415187123 +0000 UTC m=+0.040681468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:18:27 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:18:27 np0005539508 podman[84727]: 2025-11-29 06:18:27.540185868 +0000 UTC m=+0.165680183 container init 2b03d4033753b11e055402783a9f95f9b78ada79eff8daec7739e578874ec7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 01:18:27 np0005539508 podman[84727]: 2025-11-29 06:18:27.547440095 +0000 UTC m=+0.172934380 container start 2b03d4033753b11e055402783a9f95f9b78ada79eff8daec7739e578874ec7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_torvalds, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 01:18:27 np0005539508 podman[84727]: 2025-11-29 06:18:27.551565272 +0000 UTC m=+0.177059577 container attach 2b03d4033753b11e055402783a9f95f9b78ada79eff8daec7739e578874ec7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_torvalds, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:18:27 np0005539508 jovial_torvalds[84744]: 167 167
Nov 29 01:18:27 np0005539508 systemd[1]: libpod-2b03d4033753b11e055402783a9f95f9b78ada79eff8daec7739e578874ec7fc.scope: Deactivated successfully.
Nov 29 01:18:27 np0005539508 podman[84727]: 2025-11-29 06:18:27.553013883 +0000 UTC m=+0.178508158 container died 2b03d4033753b11e055402783a9f95f9b78ada79eff8daec7739e578874ec7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_torvalds, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 01:18:27 np0005539508 systemd[1]: var-lib-containers-storage-overlay-c7d6257364e1f30e3a821a05f81b3b8ef2e963560be85759ffa2e8a2f758d24f-merged.mount: Deactivated successfully.
Nov 29 01:18:27 np0005539508 podman[84727]: 2025-11-29 06:18:27.601503372 +0000 UTC m=+0.226997667 container remove 2b03d4033753b11e055402783a9f95f9b78ada79eff8daec7739e578874ec7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_torvalds, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 01:18:27 np0005539508 systemd[1]: libpod-conmon-2b03d4033753b11e055402783a9f95f9b78ada79eff8daec7739e578874ec7fc.scope: Deactivated successfully.
Nov 29 01:18:27 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:18:27 np0005539508 podman[84775]: 2025-11-29 06:18:27.967561124 +0000 UTC m=+0.071995308 container create 8655f4b58fd23ddd6c98d267f9d9b5861fd4765b50c10f518742b0d848b14b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 01:18:27 np0005539508 ceph-mon[74654]: Deploying daemon osd.1 on compute-0
Nov 29 01:18:28 np0005539508 systemd[1]: Started libpod-conmon-8655f4b58fd23ddd6c98d267f9d9b5861fd4765b50c10f518742b0d848b14b92.scope.
Nov 29 01:18:28 np0005539508 podman[84775]: 2025-11-29 06:18:27.939225148 +0000 UTC m=+0.043659422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:18:28 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:18:28 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41fe15836814a3cdb20fdc65a9f8814f3bcaa2de46b0f00e08a5b8a68cdf3064/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:28 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41fe15836814a3cdb20fdc65a9f8814f3bcaa2de46b0f00e08a5b8a68cdf3064/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:28 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41fe15836814a3cdb20fdc65a9f8814f3bcaa2de46b0f00e08a5b8a68cdf3064/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:28 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41fe15836814a3cdb20fdc65a9f8814f3bcaa2de46b0f00e08a5b8a68cdf3064/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:28 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41fe15836814a3cdb20fdc65a9f8814f3bcaa2de46b0f00e08a5b8a68cdf3064/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:28 np0005539508 podman[84775]: 2025-11-29 06:18:28.06865312 +0000 UTC m=+0.173087334 container init 8655f4b58fd23ddd6c98d267f9d9b5861fd4765b50c10f518742b0d848b14b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:18:28 np0005539508 podman[84775]: 2025-11-29 06:18:28.080745524 +0000 UTC m=+0.185179728 container start 8655f4b58fd23ddd6c98d267f9d9b5861fd4765b50c10f518742b0d848b14b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:18:28 np0005539508 podman[84775]: 2025-11-29 06:18:28.086297142 +0000 UTC m=+0.190731356 container attach 8655f4b58fd23ddd6c98d267f9d9b5861fd4765b50c10f518742b0d848b14b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 01:18:28 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate-test[84791]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 29 01:18:28 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate-test[84791]:                            [--no-systemd] [--no-tmpfs]
Nov 29 01:18:28 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate-test[84791]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 29 01:18:28 np0005539508 systemd[1]: libpod-8655f4b58fd23ddd6c98d267f9d9b5861fd4765b50c10f518742b0d848b14b92.scope: Deactivated successfully.
Nov 29 01:18:28 np0005539508 podman[84775]: 2025-11-29 06:18:28.771438559 +0000 UTC m=+0.875872743 container died 8655f4b58fd23ddd6c98d267f9d9b5861fd4765b50c10f518742b0d848b14b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate-test, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 01:18:28 np0005539508 systemd[1]: var-lib-containers-storage-overlay-41fe15836814a3cdb20fdc65a9f8814f3bcaa2de46b0f00e08a5b8a68cdf3064-merged.mount: Deactivated successfully.
Nov 29 01:18:28 np0005539508 podman[84775]: 2025-11-29 06:18:28.832064284 +0000 UTC m=+0.936498458 container remove 8655f4b58fd23ddd6c98d267f9d9b5861fd4765b50c10f518742b0d848b14b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 01:18:28 np0005539508 systemd[1]: libpod-conmon-8655f4b58fd23ddd6c98d267f9d9b5861fd4765b50c10f518742b0d848b14b92.scope: Deactivated successfully.
Nov 29 01:18:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 29 01:18:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 01:18:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:18:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:18:29 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-1
Nov 29 01:18:29 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-1
Nov 29 01:18:29 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:18:30 np0005539508 systemd[1]: Reloading.
Nov 29 01:18:30 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:18:30 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:18:30 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 01:18:30 np0005539508 ceph-mon[74654]: Deploying daemon osd.0 on compute-1
Nov 29 01:18:30 np0005539508 systemd[1]: Reloading.
Nov 29 01:18:31 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:18:31 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:18:31 np0005539508 systemd[1]: Starting Ceph osd.1 for 336ec58c-893b-528f-a0c1-6ed1196bc047...
Nov 29 01:18:31 np0005539508 podman[84951]: 2025-11-29 06:18:31.497119036 +0000 UTC m=+0.037015774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:18:31 np0005539508 podman[84951]: 2025-11-29 06:18:31.639253588 +0000 UTC m=+0.179150266 container create f538f1c6c85cdca6e2ddb94855c3c04aacb93df85bc5224d5ccb4748eb1f85ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:18:31 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:18:31 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5efa240cd053f2e669665b6e0fe6008c4db5c8af9bec74c4daf98453c37fee64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:31 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5efa240cd053f2e669665b6e0fe6008c4db5c8af9bec74c4daf98453c37fee64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:31 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5efa240cd053f2e669665b6e0fe6008c4db5c8af9bec74c4daf98453c37fee64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:31 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5efa240cd053f2e669665b6e0fe6008c4db5c8af9bec74c4daf98453c37fee64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:31 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5efa240cd053f2e669665b6e0fe6008c4db5c8af9bec74c4daf98453c37fee64/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:31 np0005539508 podman[84951]: 2025-11-29 06:18:31.736291338 +0000 UTC m=+0.276188006 container init f538f1c6c85cdca6e2ddb94855c3c04aacb93df85bc5224d5ccb4748eb1f85ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:18:31 np0005539508 podman[84951]: 2025-11-29 06:18:31.746508849 +0000 UTC m=+0.286405497 container start f538f1c6c85cdca6e2ddb94855c3c04aacb93df85bc5224d5ccb4748eb1f85ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 29 01:18:31 np0005539508 podman[84951]: 2025-11-29 06:18:31.749905986 +0000 UTC m=+0.289802634 container attach f538f1c6c85cdca6e2ddb94855c3c04aacb93df85bc5224d5ccb4748eb1f85ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 01:18:31 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v47: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:18:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:18:32 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate[84967]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 01:18:32 np0005539508 bash[84951]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 01:18:32 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate[84967]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 01:18:32 np0005539508 bash[84951]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 01:18:32 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate[84967]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 01:18:32 np0005539508 bash[84951]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 01:18:32 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate[84967]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 01:18:32 np0005539508 bash[84951]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 01:18:32 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate[84967]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 29 01:18:32 np0005539508 bash[84951]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 29 01:18:32 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate[84967]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 01:18:32 np0005539508 bash[84951]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 01:18:32 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate[84967]: --> ceph-volume raw activate successful for osd ID: 1
Nov 29 01:18:32 np0005539508 bash[84951]: --> ceph-volume raw activate successful for osd ID: 1
Nov 29 01:18:32 np0005539508 systemd[1]: libpod-f538f1c6c85cdca6e2ddb94855c3c04aacb93df85bc5224d5ccb4748eb1f85ea.scope: Deactivated successfully.
Nov 29 01:18:32 np0005539508 podman[84951]: 2025-11-29 06:18:32.673421822 +0000 UTC m=+1.213318510 container died f538f1c6c85cdca6e2ddb94855c3c04aacb93df85bc5224d5ccb4748eb1f85ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:18:32 np0005539508 systemd[1]: var-lib-containers-storage-overlay-5efa240cd053f2e669665b6e0fe6008c4db5c8af9bec74c4daf98453c37fee64-merged.mount: Deactivated successfully.
Nov 29 01:18:32 np0005539508 podman[84951]: 2025-11-29 06:18:32.756265478 +0000 UTC m=+1.296162166 container remove f538f1c6c85cdca6e2ddb94855c3c04aacb93df85bc5224d5ccb4748eb1f85ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 01:18:33 np0005539508 podman[85143]: 2025-11-29 06:18:33.006523107 +0000 UTC m=+0.049323024 container create aaeeb4acbe44bf7cf6d89d4ecc7b9d3bae84881fa82249ee28532bdc419d2e04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 01:18:33 np0005539508 podman[85143]: 2025-11-29 06:18:32.986070375 +0000 UTC m=+0.028870272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:18:33 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eed60abb6f68b642fdddee6bc2862ca1579c2d2ae4c6fec73b78a6ec716d5ae0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:33 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eed60abb6f68b642fdddee6bc2862ca1579c2d2ae4c6fec73b78a6ec716d5ae0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:33 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eed60abb6f68b642fdddee6bc2862ca1579c2d2ae4c6fec73b78a6ec716d5ae0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:33 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eed60abb6f68b642fdddee6bc2862ca1579c2d2ae4c6fec73b78a6ec716d5ae0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:33 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eed60abb6f68b642fdddee6bc2862ca1579c2d2ae4c6fec73b78a6ec716d5ae0/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:33 np0005539508 podman[85143]: 2025-11-29 06:18:33.315066152 +0000 UTC m=+0.357866129 container init aaeeb4acbe44bf7cf6d89d4ecc7b9d3bae84881fa82249ee28532bdc419d2e04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 01:18:33 np0005539508 podman[85143]: 2025-11-29 06:18:33.324947913 +0000 UTC m=+0.367747870 container start aaeeb4acbe44bf7cf6d89d4ecc7b9d3bae84881fa82249ee28532bdc419d2e04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 01:18:33 np0005539508 ceph-osd[85162]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 01:18:33 np0005539508 ceph-osd[85162]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 29 01:18:33 np0005539508 ceph-osd[85162]: pidfile_write: ignore empty --pid-file
Nov 29 01:18:33 np0005539508 ceph-osd[85162]: bdev(0x5633efaf9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 01:18:33 np0005539508 ceph-osd[85162]: bdev(0x5633efaf9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 01:18:33 np0005539508 ceph-osd[85162]: bdev(0x5633efaf9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 01:18:33 np0005539508 ceph-osd[85162]: bdev(0x5633efaf9800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 01:18:33 np0005539508 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 01:18:33 np0005539508 ceph-osd[85162]: bdev(0x5633f0931800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 01:18:33 np0005539508 ceph-osd[85162]: bdev(0x5633f0931800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 01:18:33 np0005539508 ceph-osd[85162]: bdev(0x5633f0931800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 01:18:33 np0005539508 ceph-osd[85162]: bdev(0x5633f0931800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 01:18:33 np0005539508 ceph-osd[85162]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Nov 29 01:18:33 np0005539508 ceph-osd[85162]: bdev(0x5633f0931800 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 01:18:33 np0005539508 bash[85143]: aaeeb4acbe44bf7cf6d89d4ecc7b9d3bae84881fa82249ee28532bdc419d2e04
Nov 29 01:18:33 np0005539508 systemd[1]: Started Ceph osd.1 for 336ec58c-893b-528f-a0c1-6ed1196bc047.
Nov 29 01:18:33 np0005539508 ceph-osd[85162]: bdev(0x5633efaf9800 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 01:18:33 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:18:33 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:33 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:18:33 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:33 np0005539508 ceph-osd[85162]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 29 01:18:33 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v48: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:18:33 np0005539508 ceph-osd[85162]: load: jerasure load: lrc 
Nov 29 01:18:33 np0005539508 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 01:18:33 np0005539508 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 01:18:33 np0005539508 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 01:18:33 np0005539508 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 01:18:33 np0005539508 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 01:18:33 np0005539508 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bdev(0x5633f09ad400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bdev(0x5633f09ad400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bdev(0x5633f09ad400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bdev(0x5633f09ad400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bluefs mount
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bluefs mount shared_bdev_used = 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: RocksDB version: 7.9.2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Git sha 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: DB SUMMARY
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: DB Session ID:  2QR1MYHZ2PW1Z4CTUV0E
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: CURRENT file:  CURRENT
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                         Options.error_if_exists: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.create_if_missing: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                                     Options.env: 0x5633f0983c70
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                                Options.info_log: 0x5633efb76ba0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                              Options.statistics: (nil)
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.use_fsync: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                              Options.db_log_dir: 
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.write_buffer_manager: 0x5633f0a86460
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.unordered_write: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.row_cache: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                              Options.wal_filter: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.two_write_queues: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.wal_compression: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.atomic_flush: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.max_background_jobs: 4
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.max_background_compactions: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.max_subcompactions: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.max_open_files: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Compression algorithms supported:
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: #011kZSTD supported: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: #011kXpressCompression supported: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: #011kBZip2Compression supported: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: #011kLZ4Compression supported: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: #011kZlibCompression supported: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: #011kSnappyCompression supported: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb76600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5633efb6cdd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb76600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5633efb6cdd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb76600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5633efb6cdd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb76600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5633efb6cdd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb76600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5633efb6cdd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb76600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5633efb6cdd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb76600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5633efb6cdd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb765c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5633efb6c430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb765c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5633efb6c430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb765c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5633efb6c430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: dfb129aa-a58b-42ea-bfc0-d0183185d57f
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397114513287, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397114513504, "job": 1, "event": "recovery_finished"}
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: freelist init
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: freelist _read_cfg
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bluefs umount
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bdev(0x5633f09ad400 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 01:18:34 np0005539508 podman[85522]: 2025-11-29 06:18:34.606498025 +0000 UTC m=+0.029003436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bdev(0x5633f09ad400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bdev(0x5633f09ad400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bdev(0x5633f09ad400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bdev(0x5633f09ad400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bluefs mount
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bluefs mount shared_bdev_used = 4718592
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: RocksDB version: 7.9.2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Git sha 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: DB SUMMARY
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: DB Session ID:  2QR1MYHZ2PW1Z4CTUV0F
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: CURRENT file:  CURRENT
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                         Options.error_if_exists: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.create_if_missing: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                                     Options.env: 0x5633efbb8700
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                                Options.info_log: 0x5633efb77860
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                              Options.statistics: (nil)
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.use_fsync: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                              Options.db_log_dir: 
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.write_buffer_manager: 0x5633f0a86960
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.unordered_write: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.row_cache: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                              Options.wal_filter: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.two_write_queues: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.wal_compression: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.atomic_flush: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.max_background_jobs: 4
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.max_background_compactions: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.max_subcompactions: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.max_open_files: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Compression algorithms supported:
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: 	kZSTD supported: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: 	kXpressCompression supported: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: 	kBZip2Compression supported: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: 	kLZ4Compression supported: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: 	kZlibCompression supported: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: 	kSnappyCompression supported: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb80860)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5633efb6d610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb80860)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5633efb6d610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb80860)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5633efb6d610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb80860)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5633efb6d610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 01:18:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb80860)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5633efb6d610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb80860)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5633efb6d610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb80860)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5633efb6d610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb808a0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5633efb6d770
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb808a0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5633efb6d770
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb808a0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5633efb6d770
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: dfb129aa-a58b-42ea-bfc0-d0183185d57f
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397114790726, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 01:18:34 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 01:18:34 np0005539508 podman[85522]: 2025-11-29 06:18:34.825109513 +0000 UTC m=+0.247614874 container create b0d7ff18d11f046ee283d785e964afdebf733acc8af2f2ea0ee11b18d0b77737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 01:18:35 np0005539508 ceph-osd[85162]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397115009622, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764397114, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfb129aa-a58b-42ea-bfc0-d0183185d57f", "db_session_id": "2QR1MYHZ2PW1Z4CTUV0F", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:18:35 np0005539508 systemd[1]: Started libpod-conmon-b0d7ff18d11f046ee283d785e964afdebf733acc8af2f2ea0ee11b18d0b77737.scope.
Nov 29 01:18:35 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:35 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:18:35 np0005539508 ceph-osd[85162]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397115304030, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764397115, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfb129aa-a58b-42ea-bfc0-d0183185d57f", "db_session_id": "2QR1MYHZ2PW1Z4CTUV0F", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:18:35 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:35 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:35 np0005539508 podman[85522]: 2025-11-29 06:18:35.532079641 +0000 UTC m=+0.954585052 container init b0d7ff18d11f046ee283d785e964afdebf733acc8af2f2ea0ee11b18d0b77737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 29 01:18:35 np0005539508 ceph-osd[85162]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397115532383, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764397115, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfb129aa-a58b-42ea-bfc0-d0183185d57f", "db_session_id": "2QR1MYHZ2PW1Z4CTUV0F", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:18:35 np0005539508 ceph-osd[85162]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397115534133, "job": 1, "event": "recovery_finished"}
Nov 29 01:18:35 np0005539508 ceph-osd[85162]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 29 01:18:35 np0005539508 podman[85522]: 2025-11-29 06:18:35.543527547 +0000 UTC m=+0.966032888 container start b0d7ff18d11f046ee283d785e964afdebf733acc8af2f2ea0ee11b18d0b77737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:18:35 np0005539508 naughty_edison[85721]: 167 167
Nov 29 01:18:35 np0005539508 systemd[1]: libpod-b0d7ff18d11f046ee283d785e964afdebf733acc8af2f2ea0ee11b18d0b77737.scope: Deactivated successfully.
Nov 29 01:18:35 np0005539508 podman[85522]: 2025-11-29 06:18:35.557798313 +0000 UTC m=+0.980303684 container attach b0d7ff18d11f046ee283d785e964afdebf733acc8af2f2ea0ee11b18d0b77737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_edison, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:18:35 np0005539508 podman[85522]: 2025-11-29 06:18:35.558678598 +0000 UTC m=+0.981183969 container died b0d7ff18d11f046ee283d785e964afdebf733acc8af2f2ea0ee11b18d0b77737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_edison, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 01:18:35 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5633efc3fc00
Nov 29 01:18:35 np0005539508 ceph-osd[85162]: rocksdb: DB pointer 0x5633f0a6fa00
Nov 29 01:18:35 np0005539508 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 01:18:35 np0005539508 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 29 01:18:35 np0005539508 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 29 01:18:35 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 01:18:35 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.8 total, 0.8 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.8 total, 0.8 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.8 total, 0.8 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.8 total, 0.8 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Nov 29 01:18:35 np0005539508 ceph-osd[85162]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 29 01:18:35 np0005539508 ceph-osd[85162]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 29 01:18:35 np0005539508 ceph-osd[85162]: _get_class not permitted to load lua
Nov 29 01:18:35 np0005539508 ceph-osd[85162]: _get_class not permitted to load sdk
Nov 29 01:18:35 np0005539508 ceph-osd[85162]: _get_class not permitted to load test_remote_reads
Nov 29 01:18:35 np0005539508 ceph-osd[85162]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 29 01:18:35 np0005539508 ceph-osd[85162]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 29 01:18:35 np0005539508 ceph-osd[85162]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 29 01:18:35 np0005539508 ceph-osd[85162]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 29 01:18:35 np0005539508 ceph-osd[85162]: osd.1 0 load_pgs
Nov 29 01:18:35 np0005539508 ceph-osd[85162]: osd.1 0 load_pgs opened 0 pgs
Nov 29 01:18:35 np0005539508 ceph-osd[85162]: osd.1 0 log_to_monitors true
Nov 29 01:18:35 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1[85158]: 2025-11-29T06:18:35.582+0000 7f4f3ca3a740 -1 osd.1 0 log_to_monitors true
Nov 29 01:18:35 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Nov 29 01:18:35 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2704652432,v1:192.168.122.100:6803/2704652432]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 29 01:18:35 np0005539508 systemd[1]: var-lib-containers-storage-overlay-92a2d3340cbc862a541bed5b97f4c12f115a85789cdb94d0e4d810d21bad5ac9-merged.mount: Deactivated successfully.
Nov 29 01:18:35 np0005539508 podman[85522]: 2025-11-29 06:18:35.626790395 +0000 UTC m=+1.049295746 container remove b0d7ff18d11f046ee283d785e964afdebf733acc8af2f2ea0ee11b18d0b77737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_edison, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 01:18:35 np0005539508 systemd[1]: libpod-conmon-b0d7ff18d11f046ee283d785e964afdebf733acc8af2f2ea0ee11b18d0b77737.scope: Deactivated successfully.
Nov 29 01:18:35 np0005539508 podman[85777]: 2025-11-29 06:18:35.829771268 +0000 UTC m=+0.068808328 container create 30a006593d7f5630461b741d473a48105ab1bcac46e3232be6820248e7056b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_golick, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:18:35 np0005539508 systemd[1]: Started libpod-conmon-30a006593d7f5630461b741d473a48105ab1bcac46e3232be6820248e7056b35.scope.
Nov 29 01:18:35 np0005539508 podman[85777]: 2025-11-29 06:18:35.804399837 +0000 UTC m=+0.043436937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:18:35 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:18:35 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646e62d3f09f54e0e8abc5665b3148e1af7d87e20126a3ae9d5ed863a52c1fed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:35 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646e62d3f09f54e0e8abc5665b3148e1af7d87e20126a3ae9d5ed863a52c1fed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:35 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646e62d3f09f54e0e8abc5665b3148e1af7d87e20126a3ae9d5ed863a52c1fed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:35 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646e62d3f09f54e0e8abc5665b3148e1af7d87e20126a3ae9d5ed863a52c1fed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:35 np0005539508 podman[85777]: 2025-11-29 06:18:35.930771061 +0000 UTC m=+0.169808101 container init 30a006593d7f5630461b741d473a48105ab1bcac46e3232be6820248e7056b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:18:35 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v49: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:18:35 np0005539508 podman[85777]: 2025-11-29 06:18:35.943815542 +0000 UTC m=+0.182852612 container start 30a006593d7f5630461b741d473a48105ab1bcac46e3232be6820248e7056b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_golick, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:18:35 np0005539508 podman[85777]: 2025-11-29 06:18:35.948821135 +0000 UTC m=+0.187858255 container attach 30a006593d7f5630461b741d473a48105ab1bcac46e3232be6820248e7056b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_golick, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/2367106429,v1:192.168.122.101:6801/2367106429]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2704652432,v1:192.168.122.100:6803/2704652432]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/2367106429,v1:192.168.122.101:6801/2367106429]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: from='osd.1 [v2:192.168.122.100:6802/2704652432,v1:192.168.122.100:6803/2704652432]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: from='osd.0 [v2:192.168.122.101:6800/2367106429,v1:192.168.122.101:6801/2367106429]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2704652432,v1:192.168.122.100:6803/2704652432]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0068000000000000005 at location {host=compute-0,root=default}
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]} v 0) v1
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/2367106429,v1:192.168.122.101:6801/2367106429]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0068000000000000005 at location {host=compute-1,root=default}
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 01:18:36 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 01:18:36 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 01:18:36 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 29 01:18:36 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 29 01:18:36 np0005539508 competent_golick[85794]: {
Nov 29 01:18:36 np0005539508 competent_golick[85794]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:18:36 np0005539508 competent_golick[85794]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:18:36 np0005539508 competent_golick[85794]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:18:36 np0005539508 competent_golick[85794]:        "osd_id": 1,
Nov 29 01:18:36 np0005539508 competent_golick[85794]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:18:36 np0005539508 competent_golick[85794]:        "type": "bluestore"
Nov 29 01:18:36 np0005539508 competent_golick[85794]:    }
Nov 29 01:18:36 np0005539508 competent_golick[85794]: }
Nov 29 01:18:36 np0005539508 systemd[1]: libpod-30a006593d7f5630461b741d473a48105ab1bcac46e3232be6820248e7056b35.scope: Deactivated successfully.
Nov 29 01:18:36 np0005539508 conmon[85794]: conmon 30a006593d7f5630461b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-30a006593d7f5630461b741d473a48105ab1bcac46e3232be6820248e7056b35.scope/container/memory.events
Nov 29 01:18:36 np0005539508 podman[85777]: 2025-11-29 06:18:36.856142031 +0000 UTC m=+1.095179091 container died 30a006593d7f5630461b741d473a48105ab1bcac46e3232be6820248e7056b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_golick, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 01:18:36 np0005539508 systemd[1]: var-lib-containers-storage-overlay-646e62d3f09f54e0e8abc5665b3148e1af7d87e20126a3ae9d5ed863a52c1fed-merged.mount: Deactivated successfully.
Nov 29 01:18:36 np0005539508 podman[85777]: 2025-11-29 06:18:36.921077688 +0000 UTC m=+1.160114758 container remove 30a006593d7f5630461b741d473a48105ab1bcac46e3232be6820248e7056b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_golick, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 01:18:36 np0005539508 systemd[1]: libpod-conmon-30a006593d7f5630461b741d473a48105ab1bcac46e3232be6820248e7056b35.scope: Deactivated successfully.
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:18:36 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:18:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:18:37 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 29 01:18:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 01:18:37 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2704652432,v1:192.168.122.100:6803/2704652432]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 01:18:37 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/2367106429,v1:192.168.122.101:6801/2367106429]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Nov 29 01:18:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Nov 29 01:18:37 np0005539508 ceph-osd[85162]: osd.1 0 done with init, starting boot process
Nov 29 01:18:37 np0005539508 ceph-osd[85162]: osd.1 0 start_boot
Nov 29 01:18:37 np0005539508 ceph-osd[85162]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 29 01:18:37 np0005539508 ceph-osd[85162]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 29 01:18:37 np0005539508 ceph-osd[85162]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 29 01:18:37 np0005539508 ceph-osd[85162]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 29 01:18:37 np0005539508 ceph-osd[85162]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 29 01:18:37 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Nov 29 01:18:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 01:18:37 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 01:18:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 01:18:37 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 01:18:37 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 01:18:37 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 01:18:37 np0005539508 ceph-mon[74654]: from='osd.1 [v2:192.168.122.100:6802/2704652432,v1:192.168.122.100:6803/2704652432]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 29 01:18:37 np0005539508 ceph-mon[74654]: from='osd.0 [v2:192.168.122.101:6800/2367106429,v1:192.168.122.101:6801/2367106429]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 29 01:18:37 np0005539508 ceph-mon[74654]: from='osd.1 [v2:192.168.122.100:6802/2704652432,v1:192.168.122.100:6803/2704652432]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 01:18:37 np0005539508 ceph-mon[74654]: from='osd.0 [v2:192.168.122.101:6800/2367106429,v1:192.168.122.101:6801/2367106429]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Nov 29 01:18:37 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:37 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:37 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:37 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 01:18:37 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2367106429; not ready for session (expect reconnect)
Nov 29 01:18:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 01:18:37 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 01:18:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 01:18:37 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 01:18:37 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 01:18:37 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 01:18:37 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v52: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:18:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:18:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:38 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 01:18:38 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2367106429; not ready for session (expect reconnect)
Nov 29 01:18:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 01:18:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 01:18:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 01:18:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 01:18:38 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 01:18:38 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 01:18:38 np0005539508 ceph-mon[74654]: from='osd.1 [v2:192.168.122.100:6802/2704652432,v1:192.168.122.100:6803/2704652432]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 01:18:38 np0005539508 ceph-mon[74654]: from='osd.0 [v2:192.168.122.101:6800/2367106429,v1:192.168.122.101:6801/2367106429]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Nov 29 01:18:38 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:38 np0005539508 podman[86047]: 2025-11-29 06:18:38.875327343 +0000 UTC m=+0.081176920 container exec c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 01:18:38 np0005539508 podman[86047]: 2025-11-29 06:18:38.98737673 +0000 UTC m=+0.193226227 container exec_died c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 01:18:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:18:39 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:18:39 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 01:18:39 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2367106429; not ready for session (expect reconnect)
Nov 29 01:18:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 01:18:39 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 01:18:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 01:18:39 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 01:18:39 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 01:18:39 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 01:18:39 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:39 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:18:39 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:18:39 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:39 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v53: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:18:40 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 01:18:40 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 01:18:40 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 01:18:40 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 01:18:40 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2367106429; not ready for session (expect reconnect)
Nov 29 01:18:40 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 01:18:40 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 01:18:40 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 01:18:40 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:40 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:40 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:40 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:18:40 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:41 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 01:18:41 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 01:18:41 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 01:18:41 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 01:18:41 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2367106429; not ready for session (expect reconnect)
Nov 29 01:18:41 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 01:18:41 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 01:18:41 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 01:18:41 np0005539508 podman[86400]: 2025-11-29 06:18:41.645189075 +0000 UTC m=+0.094822608 container create 6b2fe0ba7b4f2cfbf59fbaa3070f28df6276301e2391d6003dbd5c3be7133ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:18:41 np0005539508 podman[86400]: 2025-11-29 06:18:41.590632723 +0000 UTC m=+0.040266316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:18:41 np0005539508 systemd[1]: Started libpod-conmon-6b2fe0ba7b4f2cfbf59fbaa3070f28df6276301e2391d6003dbd5c3be7133ec2.scope.
Nov 29 01:18:41 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:18:41 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:41 np0005539508 podman[86400]: 2025-11-29 06:18:41.748127462 +0000 UTC m=+0.197761015 container init 6b2fe0ba7b4f2cfbf59fbaa3070f28df6276301e2391d6003dbd5c3be7133ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lehmann, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 01:18:41 np0005539508 podman[86400]: 2025-11-29 06:18:41.802231431 +0000 UTC m=+0.251864984 container start 6b2fe0ba7b4f2cfbf59fbaa3070f28df6276301e2391d6003dbd5c3be7133ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lehmann, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:18:41 np0005539508 goofy_lehmann[86415]: 167 167
Nov 29 01:18:41 np0005539508 systemd[1]: libpod-6b2fe0ba7b4f2cfbf59fbaa3070f28df6276301e2391d6003dbd5c3be7133ec2.scope: Deactivated successfully.
Nov 29 01:18:41 np0005539508 podman[86400]: 2025-11-29 06:18:41.82715466 +0000 UTC m=+0.276788203 container attach 6b2fe0ba7b4f2cfbf59fbaa3070f28df6276301e2391d6003dbd5c3be7133ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:18:41 np0005539508 podman[86400]: 2025-11-29 06:18:41.828036435 +0000 UTC m=+0.277669968 container died 6b2fe0ba7b4f2cfbf59fbaa3070f28df6276301e2391d6003dbd5c3be7133ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lehmann, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:18:41 np0005539508 systemd[1]: var-lib-containers-storage-overlay-5bff137a26f019947d08e7348c43affbf17661e051a393a05cb96d0c48b33894-merged.mount: Deactivated successfully.
Nov 29 01:18:41 np0005539508 podman[86400]: 2025-11-29 06:18:41.925150807 +0000 UTC m=+0.374784340 container remove 6b2fe0ba7b4f2cfbf59fbaa3070f28df6276301e2391d6003dbd5c3be7133ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lehmann, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:18:41 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v54: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:18:41 np0005539508 systemd[1]: libpod-conmon-6b2fe0ba7b4f2cfbf59fbaa3070f28df6276301e2391d6003dbd5c3be7133ec2.scope: Deactivated successfully.
Nov 29 01:18:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:18:42 np0005539508 podman[86439]: 2025-11-29 06:18:42.135329915 +0000 UTC m=+0.092457381 container create b34fd008fc6133ce07387f660b49a9ab5ab042b12519fa23e83dad8a3c1fc388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 01:18:42 np0005539508 podman[86439]: 2025-11-29 06:18:42.068316699 +0000 UTC m=+0.025444245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:18:42 np0005539508 systemd[1]: Started libpod-conmon-b34fd008fc6133ce07387f660b49a9ab5ab042b12519fa23e83dad8a3c1fc388.scope.
Nov 29 01:18:42 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:18:42 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fe01acc49945aeb98bf1aa581ac0a4c85a79ae0dc167f6a1f350b9968249b46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:42 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fe01acc49945aeb98bf1aa581ac0a4c85a79ae0dc167f6a1f350b9968249b46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:42 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fe01acc49945aeb98bf1aa581ac0a4c85a79ae0dc167f6a1f350b9968249b46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:42 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fe01acc49945aeb98bf1aa581ac0a4c85a79ae0dc167f6a1f350b9968249b46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:42 np0005539508 podman[86439]: 2025-11-29 06:18:42.378730288 +0000 UTC m=+0.335857754 container init b34fd008fc6133ce07387f660b49a9ab5ab042b12519fa23e83dad8a3c1fc388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 01:18:42 np0005539508 podman[86439]: 2025-11-29 06:18:42.388319701 +0000 UTC m=+0.345447177 container start b34fd008fc6133ce07387f660b49a9ab5ab042b12519fa23e83dad8a3c1fc388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 01:18:42 np0005539508 podman[86439]: 2025-11-29 06:18:42.422071761 +0000 UTC m=+0.379199237 container attach b34fd008fc6133ce07387f660b49a9ab5ab042b12519fa23e83dad8a3c1fc388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Nov 29 01:18:42 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 01:18:42 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 01:18:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 01:18:42 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 01:18:42 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2367106429; not ready for session (expect reconnect)
Nov 29 01:18:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 01:18:42 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 01:18:42 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 01:18:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 01:18:43 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:43 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2367106429; not ready for session (expect reconnect)
Nov 29 01:18:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 01:18:43 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 01:18:43 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 01:18:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:18:43 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 01:18:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 01:18:43 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 01:18:43 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 01:18:43 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 01:18:43 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:18:43 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 29 01:18:43 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 29 01:18:43 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Nov 29 01:18:43 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Nov 29 01:18:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 29 01:18:43 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:43 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v55: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]: [
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:    {
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:        "available": false,
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:        "ceph_device": false,
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:        "lsm_data": {},
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:        "lvs": [],
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:        "path": "/dev/sr0",
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:        "rejected_reasons": [
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "Has a FileSystem",
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "Insufficient space (<5GB)"
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:        ],
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:        "sys_api": {
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "actuators": null,
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "device_nodes": "sr0",
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "devname": "sr0",
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "human_readable_size": "482.00 KB",
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "id_bus": "ata",
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "model": "QEMU DVD-ROM",
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "nr_requests": "2",
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "parent": "/dev/sr0",
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "partitions": {},
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "path": "/dev/sr0",
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "removable": "1",
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "rev": "2.5+",
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "ro": "0",
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "rotational": "1",
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "sas_address": "",
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "sas_device_handle": "",
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "scheduler_mode": "mq-deadline",
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "sectors": 0,
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "sectorsize": "2048",
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "size": 493568.0,
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "support_discard": "2048",
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "type": "disk",
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:            "vendor": "QEMU"
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:        }
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]:    }
Nov 29 01:18:44 np0005539508 optimistic_chandrasekhar[86455]: ]
Nov 29 01:18:44 np0005539508 systemd[1]: libpod-b34fd008fc6133ce07387f660b49a9ab5ab042b12519fa23e83dad8a3c1fc388.scope: Deactivated successfully.
Nov 29 01:18:44 np0005539508 podman[86439]: 2025-11-29 06:18:44.154258879 +0000 UTC m=+2.111386345 container died b34fd008fc6133ce07387f660b49a9ab5ab042b12519fa23e83dad8a3c1fc388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chandrasekhar, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:18:44 np0005539508 systemd[1]: libpod-b34fd008fc6133ce07387f660b49a9ab5ab042b12519fa23e83dad8a3c1fc388.scope: Consumed 1.781s CPU time.
Nov 29 01:18:44 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2367106429; not ready for session (expect reconnect)
Nov 29 01:18:44 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 01:18:44 np0005539508 systemd[1]: var-lib-containers-storage-overlay-9fe01acc49945aeb98bf1aa581ac0a4c85a79ae0dc167f6a1f350b9968249b46-merged.mount: Deactivated successfully.
Nov 29 01:18:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 29 01:18:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 01:18:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 01:18:44 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 01:18:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 01:18:44 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 01:18:44 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 01:18:44 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 01:18:45 np0005539508 ceph-mon[74654]: OSD bench result of 3033.995593 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 01:18:45 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:45 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:45 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:45 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:45 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 29 01:18:45 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:45 np0005539508 python3[87571]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:18:45 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2367106429; not ready for session (expect reconnect)
Nov 29 01:18:45 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 01:18:45 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v56: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 01:18:46 np0005539508 podman[86439]: 2025-11-29 06:18:46.022719114 +0000 UTC m=+3.979846620 container remove b34fd008fc6133ce07387f660b49a9ab5ab042b12519fa23e83dad8a3c1fc388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chandrasekhar, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 01:18:46 np0005539508 systemd[1]: libpod-conmon-b34fd008fc6133ce07387f660b49a9ab5ab042b12519fa23e83dad8a3c1fc388.scope: Deactivated successfully.
Nov 29 01:18:46 np0005539508 podman[87573]: 2025-11-29 06:18:46.068257659 +0000 UTC m=+0.471150752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:18:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 01:18:46 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 01:18:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 01:18:46 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 01:18:46 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 01:18:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:18:46 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 01:18:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e8 e8: 2 total, 1 up, 2 in
Nov 29 01:18:46 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.101:6800/2367106429,v1:192.168.122.101:6801/2367106429] boot
Nov 29 01:18:46 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 1 up, 2 in
Nov 29 01:18:46 np0005539508 podman[87573]: 2025-11-29 06:18:46.487743991 +0000 UTC m=+0.890637054 container create 0a4a1ef266e3eeb6737e246b1ad1178bc75fba9b36468eca81ce9443bec7f4b8 (image=quay.io/ceph/ceph:v18, name=naughty_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:18:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 01:18:46 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 01:18:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 01:18:46 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 01:18:46 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 01:18:46 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 01:18:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 01:18:46 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 01:18:46 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 01:18:46 np0005539508 ceph-mon[74654]: Adjusting osd_memory_target on compute-1 to  5247M
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:18:47 np0005539508 systemd[1]: Started libpod-conmon-0a4a1ef266e3eeb6737e246b1ad1178bc75fba9b36468eca81ce9443bec7f4b8.scope.
Nov 29 01:18:47 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:18:47 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155555d5af453f76fad9e26aca879a8f5a0b9ba4d577404fba06c94b0d4b659c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:47 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155555d5af453f76fad9e26aca879a8f5a0b9ba4d577404fba06c94b0d4b659c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:47 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155555d5af453f76fad9e26aca879a8f5a0b9ba4d577404fba06c94b0d4b659c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:47 np0005539508 podman[87573]: 2025-11-29 06:18:47.600096988 +0000 UTC m=+2.002990091 container init 0a4a1ef266e3eeb6737e246b1ad1178bc75fba9b36468eca81ce9443bec7f4b8 (image=quay.io/ceph/ceph:v18, name=naughty_shamir, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:18:47 np0005539508 podman[87573]: 2025-11-29 06:18:47.614827347 +0000 UTC m=+2.017720450 container start 0a4a1ef266e3eeb6737e246b1ad1178bc75fba9b36468eca81ce9443bec7f4b8 (image=quay.io/ceph/ceph:v18, name=naughty_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:18:47 np0005539508 podman[87573]: 2025-11-29 06:18:47.654750093 +0000 UTC m=+2.057643156 container attach 0a4a1ef266e3eeb6737e246b1ad1178bc75fba9b36468eca81ce9443bec7f4b8 (image=quay.io/ceph/ceph:v18, name=naughty_shamir, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:18:47 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 01:18:47 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 29 01:18:47 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.0M
Nov 29 01:18:47 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.0M
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 29 01:18:47 np0005539508 ceph-mgr[74948]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Nov 29 01:18:47 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e9 e9: 2 total, 1 up, 2 in
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 1 up, 2 in
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 01:18:47 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 01:18:47 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v59: 0 pgs: ; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: osd.0 [v2:192.168.122.101:6800/2367106429,v1:192.168.122.101:6801/2367106429] boot
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:47 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 29 01:18:48 np0005539508 ceph-osd[85162]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 2.751 iops: 704.243 elapsed_sec: 4.260
Nov 29 01:18:48 np0005539508 ceph-osd[85162]: log_channel(cluster) log [WRN] : OSD bench result of 704.243090 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 01:18:48 np0005539508 ceph-osd[85162]: osd.1 0 waiting for initial osdmap
Nov 29 01:18:48 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1[85158]: 2025-11-29T06:18:48.119+0000 7f4f389ba640 -1 osd.1 0 waiting for initial osdmap
Nov 29 01:18:48 np0005539508 ceph-osd[85162]: osd.1 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 29 01:18:48 np0005539508 ceph-osd[85162]: osd.1 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 29 01:18:48 np0005539508 ceph-osd[85162]: osd.1 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 29 01:18:48 np0005539508 ceph-osd[85162]: osd.1 9 check_osdmap_features require_osd_release unknown -> reef
Nov 29 01:18:48 np0005539508 ceph-osd[85162]: osd.1 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 01:18:48 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1[85158]: 2025-11-29T06:18:48.164+0000 7f4f33fe2640 -1 osd.1 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 01:18:48 np0005539508 ceph-osd[85162]: osd.1 9 set_numa_affinity not setting numa affinity
Nov 29 01:18:48 np0005539508 ceph-osd[85162]: osd.1 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 29 01:18:48 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 01:18:48 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4154088777' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 01:18:48 np0005539508 naughty_shamir[87589]: 
Nov 29 01:18:48 np0005539508 naughty_shamir[87589]: {"fsid":"336ec58c-893b-528f-a0c1-6ed1196bc047","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false},"CEPHADM_REFRESH_FAILED":{"severity":"HEALTH_WARN","summary":{"message":"failed to probe daemons or devices","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":161,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":9,"num_osds":2,"num_up_osds":1,"osd_up_since":1764397124,"num_in_osds":2,"osd_in_since":1764397101,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-29T06:17:55.922038+0000","services":{}},"progress_events":{}}
Nov 29 01:18:48 np0005539508 ceph-mgr[74948]: [devicehealth INFO root] creating mgr pool
Nov 29 01:18:48 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Nov 29 01:18:48 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 29 01:18:48 np0005539508 systemd[1]: libpod-0a4a1ef266e3eeb6737e246b1ad1178bc75fba9b36468eca81ce9443bec7f4b8.scope: Deactivated successfully.
Nov 29 01:18:48 np0005539508 podman[87573]: 2025-11-29 06:18:48.304559095 +0000 UTC m=+2.707452198 container died 0a4a1ef266e3eeb6737e246b1ad1178bc75fba9b36468eca81ce9443bec7f4b8 (image=quay.io/ceph/ceph:v18, name=naughty_shamir, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 01:18:48 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 01:18:48 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 01:18:48 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 01:18:48 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 01:18:48 np0005539508 systemd[1]: var-lib-containers-storage-overlay-155555d5af453f76fad9e26aca879a8f5a0b9ba4d577404fba06c94b0d4b659c-merged.mount: Deactivated successfully.
Nov 29 01:18:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 29 01:18:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 01:18:49 np0005539508 ceph-osd[85162]: osd.1 9 tick checking mon for new map
Nov 29 01:18:49 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 01:18:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 01:18:49 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 01:18:49 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 01:18:49 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 29 01:18:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e10 e10: 2 total, 2 up, 2 in
Nov 29 01:18:49 np0005539508 podman[87573]: 2025-11-29 06:18:49.869038584 +0000 UTC m=+4.271931677 container remove 0a4a1ef266e3eeb6737e246b1ad1178bc75fba9b36468eca81ce9443bec7f4b8 (image=quay.io/ceph/ceph:v18, name=naughty_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 01:18:49 np0005539508 systemd[1]: libpod-conmon-0a4a1ef266e3eeb6737e246b1ad1178bc75fba9b36468eca81ce9443bec7f4b8.scope: Deactivated successfully.
Nov 29 01:18:49 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v60: 0 pgs: ; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Nov 29 01:18:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Nov 29 01:18:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 01:18:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 01:18:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 01:18:50 np0005539508 ceph-mon[74654]: Adjusting osd_memory_target on compute-0 to 128.0M
Nov 29 01:18:50 np0005539508 ceph-mon[74654]: Unable to set osd_memory_target on compute-0 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Nov 29 01:18:50 np0005539508 ceph-mon[74654]: OSD bench result of 704.243090 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 01:18:50 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 29 01:18:50 np0005539508 ceph-osd[85162]: osd.1 10 state: booting -> active
Nov 29 01:18:50 np0005539508 ceph-osd[85162]: osd.1 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 29 01:18:50 np0005539508 ceph-osd[85162]: osd.1 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 29 01:18:50 np0005539508 ceph-osd[85162]: osd.1 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 29 01:18:50 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6802/2704652432,v1:192.168.122.100:6803/2704652432] boot
Nov 29 01:18:50 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 2 up, 2 in
Nov 29 01:18:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 01:18:50 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 01:18:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Nov 29 01:18:50 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 29 01:18:50 np0005539508 python3[87656]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:18:50 np0005539508 podman[87657]: 2025-11-29 06:18:50.592238443 +0000 UTC m=+0.055084748 container create bffb0b92a6ce8fdb45b613c8fdba6c34e9082950dbee225613a6782cf5784332 (image=quay.io/ceph/ceph:v18, name=interesting_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 01:18:50 np0005539508 podman[87657]: 2025-11-29 06:18:50.563772104 +0000 UTC m=+0.026618469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:18:50 np0005539508 systemd[1]: Started libpod-conmon-bffb0b92a6ce8fdb45b613c8fdba6c34e9082950dbee225613a6782cf5784332.scope.
Nov 29 01:18:50 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:18:50 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13204247c2a1b37eeecf24f2c04e180cd468ed2252d74ee55bfa6fcc1e2d686b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:50 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13204247c2a1b37eeecf24f2c04e180cd468ed2252d74ee55bfa6fcc1e2d686b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 29 01:18:50 np0005539508 podman[87657]: 2025-11-29 06:18:50.808415082 +0000 UTC m=+0.271261397 container init bffb0b92a6ce8fdb45b613c8fdba6c34e9082950dbee225613a6782cf5784332 (image=quay.io/ceph/ceph:v18, name=interesting_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:18:50 np0005539508 podman[87657]: 2025-11-29 06:18:50.814969938 +0000 UTC m=+0.277816243 container start bffb0b92a6ce8fdb45b613c8fdba6c34e9082950dbee225613a6782cf5784332 (image=quay.io/ceph/ceph:v18, name=interesting_lumiere, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 01:18:51 np0005539508 podman[87657]: 2025-11-29 06:18:51.13178245 +0000 UTC m=+0.594628825 container attach bffb0b92a6ce8fdb45b613c8fdba6c34e9082950dbee225613a6782cf5784332 (image=quay.io/ceph/ceph:v18, name=interesting_lumiere, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 01:18:51 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 29 01:18:51 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Nov 29 01:18:51 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 29 01:18:51 np0005539508 ceph-mon[74654]: osd.1 [v2:192.168.122.100:6802/2704652432,v1:192.168.122.100:6803/2704652432] boot
Nov 29 01:18:51 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 29 01:18:51 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Nov 29 01:18:51 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 01:18:51 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3176932223' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 01:18:51 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v63: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Nov 29 01:18:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 29 01:18:52 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 29 01:18:52 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/3176932223' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 01:18:52 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3176932223' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 01:18:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Nov 29 01:18:52 np0005539508 interesting_lumiere[87672]: pool 'vms' created
Nov 29 01:18:52 np0005539508 systemd[1]: libpod-bffb0b92a6ce8fdb45b613c8fdba6c34e9082950dbee225613a6782cf5784332.scope: Deactivated successfully.
Nov 29 01:18:52 np0005539508 podman[87657]: 2025-11-29 06:18:52.769456771 +0000 UTC m=+2.232303066 container died bffb0b92a6ce8fdb45b613c8fdba6c34e9082950dbee225613a6782cf5784332 (image=quay.io/ceph/ceph:v18, name=interesting_lumiere, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:18:52 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Nov 29 01:18:52 np0005539508 ceph-mgr[74948]: [devicehealth INFO root] creating main.db for devicehealth
Nov 29 01:18:52 np0005539508 ceph-mgr[74948]: [devicehealth INFO root] Check health
Nov 29 01:18:52 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 29 01:18:52 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 29 01:18:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 01:18:52 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 01:18:53 np0005539508 systemd[1]: var-lib-containers-storage-overlay-13204247c2a1b37eeecf24f2c04e180cd468ed2252d74ee55bfa6fcc1e2d686b-merged.mount: Deactivated successfully.
Nov 29 01:18:53 np0005539508 podman[87657]: 2025-11-29 06:18:53.124029956 +0000 UTC m=+2.586876271 container remove bffb0b92a6ce8fdb45b613c8fdba6c34e9082950dbee225613a6782cf5784332 (image=quay.io/ceph/ceph:v18, name=interesting_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 01:18:53 np0005539508 systemd[1]: libpod-conmon-bffb0b92a6ce8fdb45b613c8fdba6c34e9082950dbee225613a6782cf5784332.scope: Deactivated successfully.
Nov 29 01:18:53 np0005539508 python3[87753]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:18:53 np0005539508 podman[87754]: 2025-11-29 06:18:53.515151181 +0000 UTC m=+0.032608309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:18:53 np0005539508 podman[87754]: 2025-11-29 06:18:53.637995135 +0000 UTC m=+0.155452183 container create f7d3b015409ce580acc72cb125109a0b4a7018ac0f482feb3b01a8ac019ce8a7 (image=quay.io/ceph/ceph:v18, name=zen_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:18:53 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/3176932223' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 01:18:53 np0005539508 ceph-mon[74654]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 29 01:18:53 np0005539508 ceph-mon[74654]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 29 01:18:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 29 01:18:53 np0005539508 systemd[1]: Started libpod-conmon-f7d3b015409ce580acc72cb125109a0b4a7018ac0f482feb3b01a8ac019ce8a7.scope.
Nov 29 01:18:53 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:18:53 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d73f6a99c03753a0035c62966fa288edab79721476d4e11bbe831a912091b14/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:53 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d73f6a99c03753a0035c62966fa288edab79721476d4e11bbe831a912091b14/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Nov 29 01:18:53 np0005539508 podman[87754]: 2025-11-29 06:18:53.798222262 +0000 UTC m=+0.315679400 container init f7d3b015409ce580acc72cb125109a0b4a7018ac0f482feb3b01a8ac019ce8a7 (image=quay.io/ceph/ceph:v18, name=zen_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:18:53 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Nov 29 01:18:53 np0005539508 podman[87754]: 2025-11-29 06:18:53.803704318 +0000 UTC m=+0.321161366 container start f7d3b015409ce580acc72cb125109a0b4a7018ac0f482feb3b01a8ac019ce8a7 (image=quay.io/ceph/ceph:v18, name=zen_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 01:18:53 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v66: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Nov 29 01:18:54 np0005539508 podman[87754]: 2025-11-29 06:18:54.03968624 +0000 UTC m=+0.557143338 container attach f7d3b015409ce580acc72cb125109a0b4a7018ac0f482feb3b01a8ac019ce8a7 (image=quay.io/ceph/ceph:v18, name=zen_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 29 01:18:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:18:54
Nov 29 01:18:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:18:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Some PGs (0.500000) are unknown; try again later
Nov 29 01:18:54 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:18:54 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 01:18:54 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 1 (current 1)
Nov 29 01:18:54 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 01:18:54 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 01:18:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 01:18:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 01:18:54 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:18:54 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:18:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:18:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:18:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:18:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:18:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:18:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:18:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 01:18:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/577122409' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 01:18:54 np0005539508 ceph-mon[74654]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 01:18:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 29 01:18:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 29 01:18:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/577122409' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 01:18:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Nov 29 01:18:54 np0005539508 zen_hellman[87769]: pool 'volumes' created
Nov 29 01:18:54 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 01:18:54 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/577122409' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 01:18:54 np0005539508 systemd[1]: libpod-f7d3b015409ce580acc72cb125109a0b4a7018ac0f482feb3b01a8ac019ce8a7.scope: Deactivated successfully.
Nov 29 01:18:54 np0005539508 podman[87754]: 2025-11-29 06:18:54.899531405 +0000 UTC m=+1.416988493 container died f7d3b015409ce580acc72cb125109a0b4a7018ac0f482feb3b01a8ac019ce8a7 (image=quay.io/ceph/ceph:v18, name=zen_hellman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 01:18:54 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.vxabpq(active, since 2m)
Nov 29 01:18:54 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Nov 29 01:18:54 np0005539508 ceph-mgr[74948]: [progress INFO root] update: starting ev 064b1892-32fb-43cc-8532-5dc790b59bb3 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 29 01:18:54 np0005539508 ceph-mgr[74948]: [progress INFO root] complete: finished ev 064b1892-32fb-43cc-8532-5dc790b59bb3 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 29 01:18:54 np0005539508 ceph-mgr[74948]: [progress INFO root] Completed event 064b1892-32fb-43cc-8532-5dc790b59bb3 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 0 seconds
Nov 29 01:18:54 np0005539508 ceph-mgr[74948]: [progress INFO root] Writing back 3 completed events
Nov 29 01:18:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 01:18:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:54 np0005539508 systemd[1]: var-lib-containers-storage-overlay-3d73f6a99c03753a0035c62966fa288edab79721476d4e11bbe831a912091b14-merged.mount: Deactivated successfully.
Nov 29 01:18:54 np0005539508 podman[87754]: 2025-11-29 06:18:54.963646359 +0000 UTC m=+1.481103437 container remove f7d3b015409ce580acc72cb125109a0b4a7018ac0f482feb3b01a8ac019ce8a7 (image=quay.io/ceph/ceph:v18, name=zen_hellman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 29 01:18:54 np0005539508 systemd[1]: libpod-conmon-f7d3b015409ce580acc72cb125109a0b4a7018ac0f482feb3b01a8ac019ce8a7.scope: Deactivated successfully.
Nov 29 01:18:55 np0005539508 python3[87834]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:18:55 np0005539508 podman[87835]: 2025-11-29 06:18:55.451288059 +0000 UTC m=+0.041009177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:18:55 np0005539508 podman[87835]: 2025-11-29 06:18:55.81272956 +0000 UTC m=+0.402450688 container create 49a1ab6331bc7fc6abc9d7c6b744c56022f3e08a5e92e2069145227cf902de36 (image=quay.io/ceph/ceph:v18, name=recursing_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 01:18:55 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 14 pg[3.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:18:55 np0005539508 systemd[1]: Started libpod-conmon-49a1ab6331bc7fc6abc9d7c6b744c56022f3e08a5e92e2069145227cf902de36.scope.
Nov 29 01:18:55 np0005539508 ceph-mon[74654]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 01:18:55 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 29 01:18:55 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/577122409' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 01:18:55 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:18:55 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:18:55 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/125fab57deb9e97712df3812e2315839f5287fa47af0f78436e0e9a16e8d8a0d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:55 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/125fab57deb9e97712df3812e2315839f5287fa47af0f78436e0e9a16e8d8a0d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:55 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 29 01:18:55 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v68: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 453 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:18:55 np0005539508 podman[87835]: 2025-11-29 06:18:55.957453876 +0000 UTC m=+0.547174984 container init 49a1ab6331bc7fc6abc9d7c6b744c56022f3e08a5e92e2069145227cf902de36 (image=quay.io/ceph/ceph:v18, name=recursing_blackwell, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 01:18:55 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Nov 29 01:18:55 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Nov 29 01:18:55 np0005539508 podman[87835]: 2025-11-29 06:18:55.96602135 +0000 UTC m=+0.555742428 container start 49a1ab6331bc7fc6abc9d7c6b744c56022f3e08a5e92e2069145227cf902de36 (image=quay.io/ceph/ceph:v18, name=recursing_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:18:55 np0005539508 podman[87835]: 2025-11-29 06:18:55.970464236 +0000 UTC m=+0.560185324 container attach 49a1ab6331bc7fc6abc9d7c6b744c56022f3e08a5e92e2069145227cf902de36 (image=quay.io/ceph/ceph:v18, name=recursing_blackwell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:18:55 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 15 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:18:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 01:18:56 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1457732535' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 01:18:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 29 01:18:57 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1457732535' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 01:18:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Nov 29 01:18:57 np0005539508 recursing_blackwell[87850]: pool 'backups' created
Nov 29 01:18:57 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/1457732535' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 01:18:57 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Nov 29 01:18:57 np0005539508 systemd[1]: libpod-49a1ab6331bc7fc6abc9d7c6b744c56022f3e08a5e92e2069145227cf902de36.scope: Deactivated successfully.
Nov 29 01:18:57 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 16 pg[4.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:18:57 np0005539508 podman[87835]: 2025-11-29 06:18:57.398105883 +0000 UTC m=+1.987826971 container died 49a1ab6331bc7fc6abc9d7c6b744c56022f3e08a5e92e2069145227cf902de36 (image=quay.io/ceph/ceph:v18, name=recursing_blackwell, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:18:57 np0005539508 systemd[1]: var-lib-containers-storage-overlay-125fab57deb9e97712df3812e2315839f5287fa47af0f78436e0e9a16e8d8a0d-merged.mount: Deactivated successfully.
Nov 29 01:18:57 np0005539508 podman[87835]: 2025-11-29 06:18:57.761026874 +0000 UTC m=+2.350747962 container remove 49a1ab6331bc7fc6abc9d7c6b744c56022f3e08a5e92e2069145227cf902de36 (image=quay.io/ceph/ceph:v18, name=recursing_blackwell, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:18:57 np0005539508 systemd[1]: libpod-conmon-49a1ab6331bc7fc6abc9d7c6b744c56022f3e08a5e92e2069145227cf902de36.scope: Deactivated successfully.
Nov 29 01:18:57 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v71: 4 pgs: 2 active+clean, 2 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:18:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 01:18:57 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:18:58 np0005539508 python3[87915]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:18:58 np0005539508 podman[87916]: 2025-11-29 06:18:58.089326382 +0000 UTC m=+0.020018611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:18:58 np0005539508 podman[87916]: 2025-11-29 06:18:58.205138996 +0000 UTC m=+0.135831245 container create fc728f0e24d25d5215296ff5620eeb4b71b82f3fefad4c243cee7bc26fb28a8f (image=quay.io/ceph/ceph:v18, name=dazzling_goldwasser, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:18:58 np0005539508 systemd[1]: Started libpod-conmon-fc728f0e24d25d5215296ff5620eeb4b71b82f3fefad4c243cee7bc26fb28a8f.scope.
Nov 29 01:18:58 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:18:58 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00acfd42f547aaeb77ac6393d6fd6c41b796415918d47ba54ef90c269849cb73/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:58 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00acfd42f547aaeb77ac6393d6fd6c41b796415918d47ba54ef90c269849cb73/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:18:58 np0005539508 podman[87916]: 2025-11-29 06:18:58.341512895 +0000 UTC m=+0.272205194 container init fc728f0e24d25d5215296ff5620eeb4b71b82f3fefad4c243cee7bc26fb28a8f (image=quay.io/ceph/ceph:v18, name=dazzling_goldwasser, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:18:58 np0005539508 podman[87916]: 2025-11-29 06:18:58.349844392 +0000 UTC m=+0.280536641 container start fc728f0e24d25d5215296ff5620eeb4b71b82f3fefad4c243cee7bc26fb28a8f (image=quay.io/ceph/ceph:v18, name=dazzling_goldwasser, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 01:18:58 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 29 01:18:58 np0005539508 podman[87916]: 2025-11-29 06:18:58.548118301 +0000 UTC m=+0.478810550 container attach fc728f0e24d25d5215296ff5620eeb4b71b82f3fefad4c243cee7bc26fb28a8f (image=quay.io/ceph/ceph:v18, name=dazzling_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:18:58 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:18:58 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Nov 29 01:18:58 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Nov 29 01:18:59 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 17 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:18:59 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/1457732535' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 01:18:59 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:18:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 01:18:59 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2491487437' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 01:18:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 29 01:18:59 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2491487437' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 01:18:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Nov 29 01:18:59 np0005539508 dazzling_goldwasser[87932]: pool 'images' created
Nov 29 01:18:59 np0005539508 systemd[1]: libpod-fc728f0e24d25d5215296ff5620eeb4b71b82f3fefad4c243cee7bc26fb28a8f.scope: Deactivated successfully.
Nov 29 01:18:59 np0005539508 podman[87916]: 2025-11-29 06:18:59.741276719 +0000 UTC m=+1.671968968 container died fc728f0e24d25d5215296ff5620eeb4b71b82f3fefad4c243cee7bc26fb28a8f (image=quay.io/ceph/ceph:v18, name=dazzling_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:18:59 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Nov 29 01:18:59 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v74: 36 pgs: 2 active+clean, 34 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:00 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 18 pg[5.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:19:00 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:19:00 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/2491487437' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 01:19:00 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/2491487437' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 01:19:00 np0005539508 systemd[1]: var-lib-containers-storage-overlay-00acfd42f547aaeb77ac6393d6fd6c41b796415918d47ba54ef90c269849cb73-merged.mount: Deactivated successfully.
Nov 29 01:19:00 np0005539508 podman[87916]: 2025-11-29 06:19:00.613450857 +0000 UTC m=+2.544143066 container remove fc728f0e24d25d5215296ff5620eeb4b71b82f3fefad4c243cee7bc26fb28a8f (image=quay.io/ceph/ceph:v18, name=dazzling_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:19:00 np0005539508 systemd[1]: libpod-conmon-fc728f0e24d25d5215296ff5620eeb4b71b82f3fefad4c243cee7bc26fb28a8f.scope: Deactivated successfully.
Nov 29 01:19:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 29 01:19:00 np0005539508 ceph-mon[74654]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 01:19:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Nov 29 01:19:00 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Nov 29 01:19:00 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 19 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:19:00 np0005539508 python3[87996]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:01 np0005539508 podman[87997]: 2025-11-29 06:19:01.087622234 +0000 UTC m=+0.132591753 container create e88f02180bc7f94f6e1d7eee84f70a5ce5216927e7f262cec01b9f961c297f4d (image=quay.io/ceph/ceph:v18, name=sad_wing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 01:19:01 np0005539508 podman[87997]: 2025-11-29 06:19:00.995414211 +0000 UTC m=+0.040383790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:19:01 np0005539508 systemd[76267]: Starting Mark boot as successful...
Nov 29 01:19:01 np0005539508 systemd[76267]: Finished Mark boot as successful.
Nov 29 01:19:01 np0005539508 systemd[1]: Started libpod-conmon-e88f02180bc7f94f6e1d7eee84f70a5ce5216927e7f262cec01b9f961c297f4d.scope.
Nov 29 01:19:01 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:01 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/801e0b9f9daf6ae2f95eea0301b5348fd1e0467b5fab5ae2935d466989fd1d7b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:01 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/801e0b9f9daf6ae2f95eea0301b5348fd1e0467b5fab5ae2935d466989fd1d7b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:01 np0005539508 podman[87997]: 2025-11-29 06:19:01.23309285 +0000 UTC m=+0.278062419 container init e88f02180bc7f94f6e1d7eee84f70a5ce5216927e7f262cec01b9f961c297f4d (image=quay.io/ceph/ceph:v18, name=sad_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:19:01 np0005539508 podman[87997]: 2025-11-29 06:19:01.239325578 +0000 UTC m=+0.284295107 container start e88f02180bc7f94f6e1d7eee84f70a5ce5216927e7f262cec01b9f961c297f4d (image=quay.io/ceph/ceph:v18, name=sad_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 01:19:01 np0005539508 podman[87997]: 2025-11-29 06:19:01.265323997 +0000 UTC m=+0.310293486 container attach e88f02180bc7f94f6e1d7eee84f70a5ce5216927e7f262cec01b9f961c297f4d (image=quay.io/ceph/ceph:v18, name=sad_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 01:19:01 np0005539508 ceph-mon[74654]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 01:19:01 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 01:19:01 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2900095816' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 01:19:01 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 29 01:19:01 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2900095816' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 01:19:01 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Nov 29 01:19:01 np0005539508 sad_wing[88013]: pool 'cephfs.cephfs.meta' created
Nov 29 01:19:01 np0005539508 systemd[1]: libpod-e88f02180bc7f94f6e1d7eee84f70a5ce5216927e7f262cec01b9f961c297f4d.scope: Deactivated successfully.
Nov 29 01:19:01 np0005539508 podman[87997]: 2025-11-29 06:19:01.834045103 +0000 UTC m=+0.879014632 container died e88f02180bc7f94f6e1d7eee84f70a5ce5216927e7f262cec01b9f961c297f4d (image=quay.io/ceph/ceph:v18, name=sad_wing, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:19:01 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Nov 29 01:19:01 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v77: 37 pgs: 1 unknown, 1 creating+peering, 35 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:02 np0005539508 systemd[1]: var-lib-containers-storage-overlay-801e0b9f9daf6ae2f95eea0301b5348fd1e0467b5fab5ae2935d466989fd1d7b-merged.mount: Deactivated successfully.
Nov 29 01:19:02 np0005539508 podman[87997]: 2025-11-29 06:19:02.170660777 +0000 UTC m=+1.215630276 container remove e88f02180bc7f94f6e1d7eee84f70a5ce5216927e7f262cec01b9f961c297f4d (image=quay.io/ceph/ceph:v18, name=sad_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:19:02 np0005539508 systemd[1]: libpod-conmon-e88f02180bc7f94f6e1d7eee84f70a5ce5216927e7f262cec01b9f961c297f4d.scope: Deactivated successfully.
Nov 29 01:19:02 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:19:02 np0005539508 python3[88077]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:02 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/2900095816' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 01:19:02 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/2900095816' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 01:19:02 np0005539508 podman[88078]: 2025-11-29 06:19:02.563259614 +0000 UTC m=+0.051179527 container create 8d407e643af761493fbf34c0af0a9b8360abd27652467a0355c6ca887d654898 (image=quay.io/ceph/ceph:v18, name=nifty_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Nov 29 01:19:02 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 20 pg[6.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:19:02 np0005539508 systemd[1]: Started libpod-conmon-8d407e643af761493fbf34c0af0a9b8360abd27652467a0355c6ca887d654898.scope.
Nov 29 01:19:02 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:02 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad99c641ec54b1c1331d492a2c4bbce18a4402a11c3a659e288cdaaa6ddb66fa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:02 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad99c641ec54b1c1331d492a2c4bbce18a4402a11c3a659e288cdaaa6ddb66fa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:02 np0005539508 podman[88078]: 2025-11-29 06:19:02.54166323 +0000 UTC m=+0.029583153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:19:02 np0005539508 podman[88078]: 2025-11-29 06:19:02.637749063 +0000 UTC m=+0.125668996 container init 8d407e643af761493fbf34c0af0a9b8360abd27652467a0355c6ca887d654898 (image=quay.io/ceph/ceph:v18, name=nifty_murdock, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 01:19:02 np0005539508 podman[88078]: 2025-11-29 06:19:02.645448772 +0000 UTC m=+0.133368685 container start 8d407e643af761493fbf34c0af0a9b8360abd27652467a0355c6ca887d654898 (image=quay.io/ceph/ceph:v18, name=nifty_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 01:19:02 np0005539508 podman[88078]: 2025-11-29 06:19:02.6482059 +0000 UTC m=+0.136125803 container attach 8d407e643af761493fbf34c0af0a9b8360abd27652467a0355c6ca887d654898 (image=quay.io/ceph/ceph:v18, name=nifty_murdock, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:19:03 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 01:19:03 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/956031255' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 01:19:03 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 29 01:19:03 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/956031255' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 01:19:03 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v78: 37 pgs: 1 unknown, 1 creating+peering, 35 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:04 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/956031255' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 01:19:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Nov 29 01:19:04 np0005539508 nifty_murdock[88093]: pool 'cephfs.cephfs.data' created
Nov 29 01:19:04 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Nov 29 01:19:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 21 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:19:04 np0005539508 systemd[1]: libpod-8d407e643af761493fbf34c0af0a9b8360abd27652467a0355c6ca887d654898.scope: Deactivated successfully.
Nov 29 01:19:04 np0005539508 podman[88078]: 2025-11-29 06:19:04.423480235 +0000 UTC m=+1.911400158 container died 8d407e643af761493fbf34c0af0a9b8360abd27652467a0355c6ca887d654898 (image=quay.io/ceph/ceph:v18, name=nifty_murdock, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 01:19:04 np0005539508 systemd[1]: var-lib-containers-storage-overlay-ad99c641ec54b1c1331d492a2c4bbce18a4402a11c3a659e288cdaaa6ddb66fa-merged.mount: Deactivated successfully.
Nov 29 01:19:04 np0005539508 podman[88078]: 2025-11-29 06:19:04.584165135 +0000 UTC m=+2.072085048 container remove 8d407e643af761493fbf34c0af0a9b8360abd27652467a0355c6ca887d654898 (image=quay.io/ceph/ceph:v18, name=nifty_murdock, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 01:19:04 np0005539508 systemd[1]: libpod-conmon-8d407e643af761493fbf34c0af0a9b8360abd27652467a0355c6ca887d654898.scope: Deactivated successfully.
Nov 29 01:19:04 np0005539508 python3[88156]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:04 np0005539508 podman[88157]: 2025-11-29 06:19:04.980210158 +0000 UTC m=+0.083846956 container create 229057f816909ce5d8250abdce712e3ab6f957f857002a2879b59841430c0a72 (image=quay.io/ceph/ceph:v18, name=ecstatic_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 29 01:19:05 np0005539508 systemd[1]: Started libpod-conmon-229057f816909ce5d8250abdce712e3ab6f957f857002a2879b59841430c0a72.scope.
Nov 29 01:19:05 np0005539508 podman[88157]: 2025-11-29 06:19:04.934589811 +0000 UTC m=+0.038226689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:19:05 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:05 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d455d7c19516f932edba8bbc8679b59738a39f0026f5f872814f88a735cf506/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:05 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d455d7c19516f932edba8bbc8679b59738a39f0026f5f872814f88a735cf506/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:05 np0005539508 podman[88157]: 2025-11-29 06:19:05.071856085 +0000 UTC m=+0.175492933 container init 229057f816909ce5d8250abdce712e3ab6f957f857002a2879b59841430c0a72 (image=quay.io/ceph/ceph:v18, name=ecstatic_gauss, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 01:19:05 np0005539508 podman[88157]: 2025-11-29 06:19:05.08294034 +0000 UTC m=+0.186577138 container start 229057f816909ce5d8250abdce712e3ab6f957f857002a2879b59841430c0a72 (image=quay.io/ceph/ceph:v18, name=ecstatic_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 01:19:05 np0005539508 podman[88157]: 2025-11-29 06:19:05.087006906 +0000 UTC m=+0.190643694 container attach 229057f816909ce5d8250abdce712e3ab6f957f857002a2879b59841430c0a72 (image=quay.io/ceph/ceph:v18, name=ecstatic_gauss, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:19:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 29 01:19:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Nov 29 01:19:05 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Nov 29 01:19:05 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/956031255' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 01:19:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Nov 29 01:19:05 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2774593808' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 29 01:19:05 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v81: 38 pgs: 1 unknown, 37 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:06 np0005539508 ceph-mon[74654]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 01:19:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 29 01:19:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2774593808' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 29 01:19:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Nov 29 01:19:06 np0005539508 ecstatic_gauss[88172]: enabled application 'rbd' on pool 'vms'
Nov 29 01:19:06 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/2774593808' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 29 01:19:06 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Nov 29 01:19:06 np0005539508 systemd[1]: libpod-229057f816909ce5d8250abdce712e3ab6f957f857002a2879b59841430c0a72.scope: Deactivated successfully.
Nov 29 01:19:06 np0005539508 podman[88157]: 2025-11-29 06:19:06.441580115 +0000 UTC m=+1.545216923 container died 229057f816909ce5d8250abdce712e3ab6f957f857002a2879b59841430c0a72 (image=quay.io/ceph/ceph:v18, name=ecstatic_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:19:06 np0005539508 systemd[1]: var-lib-containers-storage-overlay-5d455d7c19516f932edba8bbc8679b59738a39f0026f5f872814f88a735cf506-merged.mount: Deactivated successfully.
Nov 29 01:19:06 np0005539508 podman[88157]: 2025-11-29 06:19:06.495709744 +0000 UTC m=+1.599346562 container remove 229057f816909ce5d8250abdce712e3ab6f957f857002a2879b59841430c0a72 (image=quay.io/ceph/ceph:v18, name=ecstatic_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 01:19:06 np0005539508 systemd[1]: libpod-conmon-229057f816909ce5d8250abdce712e3ab6f957f857002a2879b59841430c0a72.scope: Deactivated successfully.
Nov 29 01:19:06 np0005539508 python3[88236]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:06 np0005539508 podman[88237]: 2025-11-29 06:19:06.954255607 +0000 UTC m=+0.067834271 container create 68a3fb6205ea7076db3c8141e87379055be64c04bc0b1fb605197207bf1d2f86 (image=quay.io/ceph/ceph:v18, name=zealous_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:19:06 np0005539508 systemd[1]: Started libpod-conmon-68a3fb6205ea7076db3c8141e87379055be64c04bc0b1fb605197207bf1d2f86.scope.
Nov 29 01:19:07 np0005539508 podman[88237]: 2025-11-29 06:19:06.927749033 +0000 UTC m=+0.041327787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:19:07 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:07 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12691d85c0ff373de9fe75d7c93f547175ca6c43b8cde3faa8eb20be74953ca/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:07 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12691d85c0ff373de9fe75d7c93f547175ca6c43b8cde3faa8eb20be74953ca/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:07 np0005539508 podman[88237]: 2025-11-29 06:19:07.043458754 +0000 UTC m=+0.157037508 container init 68a3fb6205ea7076db3c8141e87379055be64c04bc0b1fb605197207bf1d2f86 (image=quay.io/ceph/ceph:v18, name=zealous_hoover, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 01:19:07 np0005539508 podman[88237]: 2025-11-29 06:19:07.053088688 +0000 UTC m=+0.166667382 container start 68a3fb6205ea7076db3c8141e87379055be64c04bc0b1fb605197207bf1d2f86 (image=quay.io/ceph/ceph:v18, name=zealous_hoover, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 29 01:19:07 np0005539508 podman[88237]: 2025-11-29 06:19:07.056977219 +0000 UTC m=+0.170555923 container attach 68a3fb6205ea7076db3c8141e87379055be64c04bc0b1fb605197207bf1d2f86 (image=quay.io/ceph/ceph:v18, name=zealous_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:19:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:19:07 np0005539508 ceph-mon[74654]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 01:19:07 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/2774593808' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 29 01:19:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Nov 29 01:19:07 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3785446785' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 29 01:19:07 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v83: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 01:19:07 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:19:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 29 01:19:08 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/3785446785' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 29 01:19:08 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:19:08 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3785446785' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 29 01:19:08 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:19:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e24 e24: 2 total, 2 up, 2 in
Nov 29 01:19:08 np0005539508 zealous_hoover[88252]: enabled application 'rbd' on pool 'volumes'
Nov 29 01:19:08 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e24: 2 total, 2 up, 2 in
Nov 29 01:19:08 np0005539508 systemd[1]: libpod-68a3fb6205ea7076db3c8141e87379055be64c04bc0b1fb605197207bf1d2f86.scope: Deactivated successfully.
Nov 29 01:19:08 np0005539508 podman[88277]: 2025-11-29 06:19:08.546076923 +0000 UTC m=+0.026441303 container died 68a3fb6205ea7076db3c8141e87379055be64c04bc0b1fb605197207bf1d2f86 (image=quay.io/ceph/ceph:v18, name=zealous_hoover, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 01:19:08 np0005539508 systemd[1]: var-lib-containers-storage-overlay-e12691d85c0ff373de9fe75d7c93f547175ca6c43b8cde3faa8eb20be74953ca-merged.mount: Deactivated successfully.
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.e( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.a( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.d( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.1e( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.c( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.4( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.6( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.1( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.1f( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.10( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.13( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.15( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.9( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.1b( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.19( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:19:09 np0005539508 podman[88277]: 2025-11-29 06:19:09.362468694 +0000 UTC m=+0.842833044 container remove 68a3fb6205ea7076db3c8141e87379055be64c04bc0b1fb605197207bf1d2f86 (image=quay.io/ceph/ceph:v18, name=zealous_hoover, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 01:19:09 np0005539508 systemd[1]: libpod-conmon-68a3fb6205ea7076db3c8141e87379055be64c04bc0b1fb605197207bf1d2f86.scope: Deactivated successfully.
Nov 29 01:19:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 29 01:19:09 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/3785446785' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 29 01:19:09 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:19:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e25 e25: 2 total, 2 up, 2 in
Nov 29 01:19:09 np0005539508 python3[88317]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:09 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e25: 2 total, 2 up, 2 in
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.a( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.13( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.15( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.1b( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.9( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.19( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.10( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.1f( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.6( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.4( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.c( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.1e( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.d( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.e( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:19:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.1( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:19:09 np0005539508 podman[88318]: 2025-11-29 06:19:09.854300103 +0000 UTC m=+0.046004820 container create ae6afcb861c1ff97ce275ae7c81ca27259c0602005cf32be66b560db576066ca (image=quay.io/ceph/ceph:v18, name=determined_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 01:19:09 np0005539508 systemd[1]: Started libpod-conmon-ae6afcb861c1ff97ce275ae7c81ca27259c0602005cf32be66b560db576066ca.scope.
Nov 29 01:19:09 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:09 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d484c8adf4d3bce5a2abee4d2305e0f379aba40c646e96fd9a983e5ba849ec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:09 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d484c8adf4d3bce5a2abee4d2305e0f379aba40c646e96fd9a983e5ba849ec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:09 np0005539508 podman[88318]: 2025-11-29 06:19:09.835836297 +0000 UTC m=+0.027541054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:19:09 np0005539508 podman[88318]: 2025-11-29 06:19:09.935445221 +0000 UTC m=+0.127149958 container init ae6afcb861c1ff97ce275ae7c81ca27259c0602005cf32be66b560db576066ca (image=quay.io/ceph/ceph:v18, name=determined_lichterman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:19:09 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v86: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:09 np0005539508 podman[88318]: 2025-11-29 06:19:09.944429426 +0000 UTC m=+0.136134153 container start ae6afcb861c1ff97ce275ae7c81ca27259c0602005cf32be66b560db576066ca (image=quay.io/ceph/ceph:v18, name=determined_lichterman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 01:19:09 np0005539508 podman[88318]: 2025-11-29 06:19:09.948639316 +0000 UTC m=+0.140344053 container attach ae6afcb861c1ff97ce275ae7c81ca27259c0602005cf32be66b560db576066ca (image=quay.io/ceph/ceph:v18, name=determined_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:19:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Nov 29 01:19:10 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3924631149' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 29 01:19:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 29 01:19:11 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3924631149' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 29 01:19:11 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e26 e26: 2 total, 2 up, 2 in
Nov 29 01:19:11 np0005539508 determined_lichterman[88334]: enabled application 'rbd' on pool 'backups'
Nov 29 01:19:11 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e26: 2 total, 2 up, 2 in
Nov 29 01:19:11 np0005539508 systemd[1]: libpod-ae6afcb861c1ff97ce275ae7c81ca27259c0602005cf32be66b560db576066ca.scope: Deactivated successfully.
Nov 29 01:19:11 np0005539508 podman[88318]: 2025-11-29 06:19:11.077612618 +0000 UTC m=+1.269317375 container died ae6afcb861c1ff97ce275ae7c81ca27259c0602005cf32be66b560db576066ca (image=quay.io/ceph/ceph:v18, name=determined_lichterman, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 01:19:11 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/3924631149' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 29 01:19:11 np0005539508 systemd[1]: var-lib-containers-storage-overlay-86d484c8adf4d3bce5a2abee4d2305e0f379aba40c646e96fd9a983e5ba849ec-merged.mount: Deactivated successfully.
Nov 29 01:19:11 np0005539508 podman[88318]: 2025-11-29 06:19:11.401462569 +0000 UTC m=+1.593167296 container remove ae6afcb861c1ff97ce275ae7c81ca27259c0602005cf32be66b560db576066ca (image=quay.io/ceph/ceph:v18, name=determined_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 29 01:19:11 np0005539508 systemd[1]: libpod-conmon-ae6afcb861c1ff97ce275ae7c81ca27259c0602005cf32be66b560db576066ca.scope: Deactivated successfully.
Nov 29 01:19:11 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 29 01:19:11 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 29 01:19:11 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:19:11 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:11 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:19:11 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:11 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:19:11 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:11 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:19:11 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:11 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 01:19:11 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 01:19:11 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:19:11 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:19:11 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:19:11 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:19:11 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 29 01:19:11 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 29 01:19:11 np0005539508 python3[88402]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:11 np0005539508 podman[88403]: 2025-11-29 06:19:11.861453993 +0000 UTC m=+0.061170171 container create 6090c0f042e3f47c4e9108b0ad3a459a59ec1d330ec88aeab783a543c01fe0f4 (image=quay.io/ceph/ceph:v18, name=quizzical_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 01:19:11 np0005539508 systemd[1]: Started libpod-conmon-6090c0f042e3f47c4e9108b0ad3a459a59ec1d330ec88aeab783a543c01fe0f4.scope.
Nov 29 01:19:11 np0005539508 podman[88403]: 2025-11-29 06:19:11.833340203 +0000 UTC m=+0.033056471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:19:11 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:11 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4fb304b2e443727673bd3f3fa99c11519ed012ef4ba39a8875b9dce44e9f42f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:11 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4fb304b2e443727673bd3f3fa99c11519ed012ef4ba39a8875b9dce44e9f42f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:11 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v88: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:11 np0005539508 podman[88403]: 2025-11-29 06:19:11.943726103 +0000 UTC m=+0.143442331 container init 6090c0f042e3f47c4e9108b0ad3a459a59ec1d330ec88aeab783a543c01fe0f4 (image=quay.io/ceph/ceph:v18, name=quizzical_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 01:19:11 np0005539508 podman[88403]: 2025-11-29 06:19:11.953691866 +0000 UTC m=+0.153408054 container start 6090c0f042e3f47c4e9108b0ad3a459a59ec1d330ec88aeab783a543c01fe0f4 (image=quay.io/ceph/ceph:v18, name=quizzical_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 01:19:11 np0005539508 podman[88403]: 2025-11-29 06:19:11.958118892 +0000 UTC m=+0.157835090 container attach 6090c0f042e3f47c4e9108b0ad3a459a59ec1d330ec88aeab783a543c01fe0f4 (image=quay.io/ceph/ceph:v18, name=quizzical_wozniak, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:19:12 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/3924631149' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 29 01:19:12 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:12 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:12 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:12 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:12 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 01:19:12 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:19:12 np0005539508 ceph-mon[74654]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 01:19:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:19:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 29 01:19:12 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/935132046' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 29 01:19:12 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 01:19:12 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 01:19:13 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 29 01:19:13 np0005539508 ceph-mon[74654]: Updating compute-2:/etc/ceph/ceph.conf
Nov 29 01:19:13 np0005539508 ceph-mon[74654]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 01:19:13 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/935132046' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 29 01:19:13 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/935132046' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 29 01:19:13 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e27 e27: 2 total, 2 up, 2 in
Nov 29 01:19:13 np0005539508 quizzical_wozniak[88419]: enabled application 'rbd' on pool 'images'
Nov 29 01:19:13 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e27: 2 total, 2 up, 2 in
Nov 29 01:19:13 np0005539508 systemd[1]: libpod-6090c0f042e3f47c4e9108b0ad3a459a59ec1d330ec88aeab783a543c01fe0f4.scope: Deactivated successfully.
Nov 29 01:19:13 np0005539508 podman[88444]: 2025-11-29 06:19:13.423775789 +0000 UTC m=+0.043966022 container died 6090c0f042e3f47c4e9108b0ad3a459a59ec1d330ec88aeab783a543c01fe0f4 (image=quay.io/ceph/ceph:v18, name=quizzical_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:19:13 np0005539508 systemd[1]: var-lib-containers-storage-overlay-e4fb304b2e443727673bd3f3fa99c11519ed012ef4ba39a8875b9dce44e9f42f-merged.mount: Deactivated successfully.
Nov 29 01:19:13 np0005539508 podman[88444]: 2025-11-29 06:19:13.486608996 +0000 UTC m=+0.106799189 container remove 6090c0f042e3f47c4e9108b0ad3a459a59ec1d330ec88aeab783a543c01fe0f4 (image=quay.io/ceph/ceph:v18, name=quizzical_wozniak, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:19:13 np0005539508 systemd[1]: libpod-conmon-6090c0f042e3f47c4e9108b0ad3a459a59ec1d330ec88aeab783a543c01fe0f4.scope: Deactivated successfully.
Nov 29 01:19:13 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 29 01:19:13 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 29 01:19:13 np0005539508 python3[88486]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:13 np0005539508 podman[88489]: 2025-11-29 06:19:13.929210445 +0000 UTC m=+0.058889726 container create d887a97bb96e42aa2997e4409be0243bcb755d1d4403e5393bdf8124aae922e0 (image=quay.io/ceph/ceph:v18, name=kind_shockley, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:19:13 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v90: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:13 np0005539508 systemd[1]: Started libpod-conmon-d887a97bb96e42aa2997e4409be0243bcb755d1d4403e5393bdf8124aae922e0.scope.
Nov 29 01:19:13 np0005539508 podman[88489]: 2025-11-29 06:19:13.89879055 +0000 UTC m=+0.028469911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:19:13 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:13 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/713c9795afd2f90e39430c67bf64c1c06d5a1c77bf79cb0a5d343234de9f543c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:13 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/713c9795afd2f90e39430c67bf64c1c06d5a1c77bf79cb0a5d343234de9f543c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:14 np0005539508 podman[88489]: 2025-11-29 06:19:14.141277147 +0000 UTC m=+0.270956548 container init d887a97bb96e42aa2997e4409be0243bcb755d1d4403e5393bdf8124aae922e0 (image=quay.io/ceph/ceph:v18, name=kind_shockley, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:19:14 np0005539508 podman[88489]: 2025-11-29 06:19:14.148006878 +0000 UTC m=+0.277686199 container start d887a97bb96e42aa2997e4409be0243bcb755d1d4403e5393bdf8124aae922e0 (image=quay.io/ceph/ceph:v18, name=kind_shockley, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 01:19:14 np0005539508 podman[88489]: 2025-11-29 06:19:14.154340219 +0000 UTC m=+0.284019590 container attach d887a97bb96e42aa2997e4409be0243bcb755d1d4403e5393bdf8124aae922e0 (image=quay.io/ceph/ceph:v18, name=kind_shockley, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Nov 29 01:19:14 np0005539508 ceph-mon[74654]: Updating compute-2:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 01:19:14 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/935132046' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 29 01:19:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Nov 29 01:19:14 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1714792720' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 29 01:19:14 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.client.admin.keyring
Nov 29 01:19:14 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.client.admin.keyring
Nov 29 01:19:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 29 01:19:15 np0005539508 ceph-mon[74654]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 29 01:19:15 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/1714792720' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 29 01:19:15 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1714792720' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 29 01:19:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e28 e28: 2 total, 2 up, 2 in
Nov 29 01:19:15 np0005539508 kind_shockley[88504]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 29 01:19:15 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e28: 2 total, 2 up, 2 in
Nov 29 01:19:15 np0005539508 systemd[1]: libpod-d887a97bb96e42aa2997e4409be0243bcb755d1d4403e5393bdf8124aae922e0.scope: Deactivated successfully.
Nov 29 01:19:15 np0005539508 podman[88489]: 2025-11-29 06:19:15.385459155 +0000 UTC m=+1.515138436 container died d887a97bb96e42aa2997e4409be0243bcb755d1d4403e5393bdf8124aae922e0 (image=quay.io/ceph/ceph:v18, name=kind_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 01:19:15 np0005539508 systemd[1]: var-lib-containers-storage-overlay-713c9795afd2f90e39430c67bf64c1c06d5a1c77bf79cb0a5d343234de9f543c-merged.mount: Deactivated successfully.
Nov 29 01:19:15 np0005539508 podman[88489]: 2025-11-29 06:19:15.661282139 +0000 UTC m=+1.790961410 container remove d887a97bb96e42aa2997e4409be0243bcb755d1d4403e5393bdf8124aae922e0 (image=quay.io/ceph/ceph:v18, name=kind_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 01:19:15 np0005539508 systemd[1]: libpod-conmon-d887a97bb96e42aa2997e4409be0243bcb755d1d4403e5393bdf8124aae922e0.scope: Deactivated successfully.
Nov 29 01:19:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:19:15 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v92: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:15 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:19:15 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:15 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v93: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:19:15 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:15 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v94: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:15 np0005539508 ceph-mgr[74948]: [progress INFO root] update: starting ev 21227504-c921-488b-8a16-30b8106c28d2 (Updating mon deployment (+2 -> 3))
Nov 29 01:19:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 01:19:15 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 01:19:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 01:19:15 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 01:19:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:19:15 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:19:15 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Nov 29 01:19:15 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Nov 29 01:19:15 np0005539508 python3[88570]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:16 np0005539508 podman[88571]: 2025-11-29 06:19:16.06986467 +0000 UTC m=+0.063947810 container create 6c1a77898f9c318198a4ed74b43e93c69d79e9232db0f09811275b4f5816722a (image=quay.io/ceph/ceph:v18, name=hopeful_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 01:19:16 np0005539508 systemd[1]: Started libpod-conmon-6c1a77898f9c318198a4ed74b43e93c69d79e9232db0f09811275b4f5816722a.scope.
Nov 29 01:19:16 np0005539508 podman[88571]: 2025-11-29 06:19:16.03295365 +0000 UTC m=+0.027036820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:19:16 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:16 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2926c245397d06e159a567dfd85ebdff5fce9974ee8c5c0665ba6cbbc461d113/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:16 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2926c245397d06e159a567dfd85ebdff5fce9974ee8c5c0665ba6cbbc461d113/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:16 np0005539508 podman[88571]: 2025-11-29 06:19:16.397615272 +0000 UTC m=+0.391698432 container init 6c1a77898f9c318198a4ed74b43e93c69d79e9232db0f09811275b4f5816722a (image=quay.io/ceph/ceph:v18, name=hopeful_lumiere, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:19:16 np0005539508 podman[88571]: 2025-11-29 06:19:16.40634438 +0000 UTC m=+0.400427520 container start 6c1a77898f9c318198a4ed74b43e93c69d79e9232db0f09811275b4f5816722a (image=quay.io/ceph/ceph:v18, name=hopeful_lumiere, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:19:16 np0005539508 ceph-mon[74654]: Updating compute-2:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.client.admin.keyring
Nov 29 01:19:16 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/1714792720' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 29 01:19:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 01:19:16 np0005539508 podman[88571]: 2025-11-29 06:19:16.420677848 +0000 UTC m=+0.414761008 container attach 6c1a77898f9c318198a4ed74b43e93c69d79e9232db0f09811275b4f5816722a (image=quay.io/ceph/ceph:v18, name=hopeful_lumiere, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:19:16 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Nov 29 01:19:16 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_REFRESH_FAILED (was: failed to probe daemons or devices)
Nov 29 01:19:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 29 01:19:17 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2338482810' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 29 01:19:17 np0005539508 ceph-mon[74654]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 01:19:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:19:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 29 01:19:17 np0005539508 ceph-mon[74654]: Deploying daemon mon.compute-2 on compute-2
Nov 29 01:19:17 np0005539508 ceph-mon[74654]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Nov 29 01:19:17 np0005539508 ceph-mon[74654]: Health check cleared: CEPHADM_REFRESH_FAILED (was: failed to probe daemons or devices)
Nov 29 01:19:17 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/2338482810' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 29 01:19:17 np0005539508 ceph-mon[74654]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 01:19:17 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2338482810' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 29 01:19:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e29 e29: 2 total, 2 up, 2 in
Nov 29 01:19:17 np0005539508 hopeful_lumiere[88587]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 29 01:19:17 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e29: 2 total, 2 up, 2 in
Nov 29 01:19:17 np0005539508 systemd[1]: libpod-6c1a77898f9c318198a4ed74b43e93c69d79e9232db0f09811275b4f5816722a.scope: Deactivated successfully.
Nov 29 01:19:17 np0005539508 podman[88571]: 2025-11-29 06:19:17.453216516 +0000 UTC m=+1.447299656 container died 6c1a77898f9c318198a4ed74b43e93c69d79e9232db0f09811275b4f5816722a (image=quay.io/ceph/ceph:v18, name=hopeful_lumiere, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 01:19:17 np0005539508 systemd[1]: var-lib-containers-storage-overlay-2926c245397d06e159a567dfd85ebdff5fce9974ee8c5c0665ba6cbbc461d113-merged.mount: Deactivated successfully.
Nov 29 01:19:17 np0005539508 podman[88571]: 2025-11-29 06:19:17.491131814 +0000 UTC m=+1.485214954 container remove 6c1a77898f9c318198a4ed74b43e93c69d79e9232db0f09811275b4f5816722a (image=quay.io/ceph/ceph:v18, name=hopeful_lumiere, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 01:19:17 np0005539508 systemd[1]: libpod-conmon-6c1a77898f9c318198a4ed74b43e93c69d79e9232db0f09811275b4f5816722a.scope: Deactivated successfully.
Nov 29 01:19:17 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 29 01:19:17 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 29 01:19:17 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v96: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 01:19:18 np0005539508 python3[88697]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/2338482810' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:19:18 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Nov 29 01:19:18 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Nov 29 01:19:18 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1332418664; not ready for session (expect reconnect)
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 01:19:18 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 01:19:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 01:19:18 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 29 01:19:18 np0005539508 python3[88768]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397158.1797266-37397-196526841392482/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:19:19 np0005539508 python3[88870]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:19:19 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1332418664; not ready for session (expect reconnect)
Nov 29 01:19:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 01:19:19 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 01:19:19 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 29 01:19:19 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v97: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:20 np0005539508 python3[88945]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397159.170986-37411-170038243516047/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=b7a9aa9ffd1d96f069d7e387f055c8a3b711590d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:19:20 np0005539508 python3[88995]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:20 np0005539508 podman[88996]: 2025-11-29 06:19:20.530697307 +0000 UTC m=+0.078949356 container create a461efb4ddd88a08256951861d468e7753cd62e39745b1e72105862d5c16358d (image=quay.io/ceph/ceph:v18, name=romantic_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 29 01:19:20 np0005539508 systemd[1]: Started libpod-conmon-a461efb4ddd88a08256951861d468e7753cd62e39745b1e72105862d5c16358d.scope.
Nov 29 01:19:20 np0005539508 podman[88996]: 2025-11-29 06:19:20.496504625 +0000 UTC m=+0.044756734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:19:20 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:20 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a40472dd652af73a2cd61771997ea7ac531ac396e584ffca3a5b0c20f016f6b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:20 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a40472dd652af73a2cd61771997ea7ac531ac396e584ffca3a5b0c20f016f6b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:20 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a40472dd652af73a2cd61771997ea7ac531ac396e584ffca3a5b0c20f016f6b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:20 np0005539508 podman[88996]: 2025-11-29 06:19:20.627347616 +0000 UTC m=+0.175599725 container init a461efb4ddd88a08256951861d468e7753cd62e39745b1e72105862d5c16358d (image=quay.io/ceph/ceph:v18, name=romantic_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:19:20 np0005539508 podman[88996]: 2025-11-29 06:19:20.63803416 +0000 UTC m=+0.186286169 container start a461efb4ddd88a08256951861d468e7753cd62e39745b1e72105862d5c16358d (image=quay.io/ceph/ceph:v18, name=romantic_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 29 01:19:20 np0005539508 podman[88996]: 2025-11-29 06:19:20.641596352 +0000 UTC m=+0.189848391 container attach a461efb4ddd88a08256951861d468e7753cd62e39745b1e72105862d5c16358d (image=quay.io/ceph/ceph:v18, name=romantic_almeida, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Nov 29 01:19:20 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1332418664; not ready for session (expect reconnect)
Nov 29 01:19:20 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 01:19:20 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 01:19:20 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 29 01:19:20 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 29 01:19:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 29 01:19:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 29 01:19:21 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1332418664; not ready for session (expect reconnect)
Nov 29 01:19:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 01:19:21 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 01:19:21 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 29 01:19:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 01:19:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 01:19:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 01:19:21 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v98: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 01:19:22 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2968721431; not ready for session (expect reconnect)
Nov 29 01:19:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 01:19:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 01:19:22 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 29 01:19:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 29 01:19:22 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.c scrub starts
Nov 29 01:19:22 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.c scrub ok
Nov 29 01:19:22 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1332418664; not ready for session (expect reconnect)
Nov 29 01:19:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 01:19:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 01:19:22 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 29 01:19:23 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2968721431; not ready for session (expect reconnect)
Nov 29 01:19:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 01:19:23 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 01:19:23 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 29 01:19:23 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1332418664; not ready for session (expect reconnect)
Nov 29 01:19:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 01:19:23 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 01:19:23 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 29 01:19:23 np0005539508 ceph-mon[74654]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Nov 29 01:19:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 01:19:23 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 01:19:23 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 01:19:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 01:19:23 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 01:19:23 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e29: 2 total, 2 up, 2 in
Nov 29 01:19:23 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.vxabpq(active, since 2m)
Nov 29 01:19:23 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 01:19:23 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:19:23 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:23 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v99: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: Deploying daemon mon.compute-1 on compute-1
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: mon.compute-0 calling monitor election
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: mon.compute-2 calling monitor election
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: overall HEALTH_OK
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:24 np0005539508 ceph-mgr[74948]: [progress INFO root] complete: finished ev 21227504-c921-488b-8a16-30b8106c28d2 (Updating mon deployment (+2 -> 3))
Nov 29 01:19:24 np0005539508 ceph-mgr[74948]: [progress INFO root] Completed event 21227504-c921-488b-8a16-30b8106c28d2 (Updating mon deployment (+2 -> 3)) in 8 seconds
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:24 np0005539508 ceph-mgr[74948]: [progress INFO root] update: starting ev 878a4358-7d35-4bea-97ea-6a2ffa9735e2 (Updating mgr deployment (+2 -> 3))
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.ngsyhe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ngsyhe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 01:19:24 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2968721431; not ready for session (expect reconnect)
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 01:19:24 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ngsyhe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:19:24 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.ngsyhe on compute-2
Nov 29 01:19:24 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.ngsyhe on compute-2
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: paxos.0).electionLogic(10) init, last seen epoch 10
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 01:19:24 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 01:19:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:19:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:19:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:19:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/501439537' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 01:19:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:19:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:19:24 np0005539508 ceph-mgr[74948]: mgr.server handle_report got status from non-daemon mon.compute-2
Nov 29 01:19:24 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:19:24.763+0000 7f90f1cf5640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Nov 29 01:19:24 np0005539508 ceph-mgr[74948]: [progress INFO root] Writing back 4 completed events
Nov 29 01:19:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 01:19:25 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2968721431; not ready for session (expect reconnect)
Nov 29 01:19:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 01:19:25 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 01:19:25 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 01:19:25 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v100: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:26 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2968721431; not ready for session (expect reconnect)
Nov 29 01:19:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 01:19:26 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 01:19:26 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 01:19:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 01:19:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:19:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 01:19:26 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 29 01:19:26 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 29 01:19:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 01:19:27 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2968721431; not ready for session (expect reconnect)
Nov 29 01:19:27 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 01:19:27 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 01:19:27 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 01:19:27 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 01:19:27 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v101: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:28 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2968721431; not ready for session (expect reconnect)
Nov 29 01:19:28 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 01:19:28 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 01:19:28 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 01:19:29 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2968721431; not ready for session (expect reconnect)
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 01:19:29 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e29: 2 total, 2 up, 2 in
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.vxabpq(active, since 2m)
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/501439537' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 01:19:29 np0005539508 romantic_almeida[89012]: 
Nov 29 01:19:29 np0005539508 romantic_almeida[89012]: [global]
Nov 29 01:19:29 np0005539508 romantic_almeida[89012]: #011fsid = 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 01:19:29 np0005539508 romantic_almeida[89012]: #011mon_host = 192.168.122.100
Nov 29 01:19:29 np0005539508 systemd[1]: libpod-a461efb4ddd88a08256951861d468e7753cd62e39745b1e72105862d5c16358d.scope: Deactivated successfully.
Nov 29 01:19:29 np0005539508 podman[88996]: 2025-11-29 06:19:29.403711649 +0000 UTC m=+8.951963658 container died a461efb4ddd88a08256951861d468e7753cd62e39745b1e72105862d5c16358d (image=quay.io/ceph/ceph:v18, name=romantic_almeida, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: Deploying daemon mgr.compute-2.ngsyhe on compute-2
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: mon.compute-0 calling monitor election
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: mon.compute-2 calling monitor election
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/501439537' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: mon.compute-1 calling monitor election
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: overall HEALTH_OK
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/501439537' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:19:29 np0005539508 systemd[1]: var-lib-containers-storage-overlay-1a40472dd652af73a2cd61771997ea7ac531ac396e584ffca3a5b0c20f016f6b-merged.mount: Deactivated successfully.
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.gaxpay", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.gaxpay", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.gaxpay", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 01:19:29 np0005539508 podman[88996]: 2025-11-29 06:19:29.574017502 +0000 UTC m=+9.122269521 container remove a461efb4ddd88a08256951861d468e7753cd62e39745b1e72105862d5c16358d (image=quay.io/ceph/ceph:v18, name=romantic_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:19:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:19:29 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.gaxpay on compute-1
Nov 29 01:19:29 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.gaxpay on compute-1
Nov 29 01:19:29 np0005539508 systemd[1]: libpod-conmon-a461efb4ddd88a08256951861d468e7753cd62e39745b1e72105862d5c16358d.scope: Deactivated successfully.
Nov 29 01:19:29 np0005539508 python3[89076]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:29 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v102: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:30 np0005539508 podman[89077]: 2025-11-29 06:19:30.025144072 +0000 UTC m=+0.076453376 container create cf0b4f9501a2f6fdaeeead08e39f617d3b1d092a80ad607e8ee3d43fc5c56f59 (image=quay.io/ceph/ceph:v18, name=upbeat_solomon, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 01:19:30 np0005539508 systemd[1]: Started libpod-conmon-cf0b4f9501a2f6fdaeeead08e39f617d3b1d092a80ad607e8ee3d43fc5c56f59.scope.
Nov 29 01:19:30 np0005539508 podman[89077]: 2025-11-29 06:19:29.995140913 +0000 UTC m=+0.046450227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:19:30 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2968721431; not ready for session (expect reconnect)
Nov 29 01:19:30 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 01:19:30 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 01:19:30 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:30 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e994cc574a0853b4352b9b649ad9dcfa236416a3318a01e3eb39fa5c20cf16d7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:30 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e994cc574a0853b4352b9b649ad9dcfa236416a3318a01e3eb39fa5c20cf16d7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:30 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e994cc574a0853b4352b9b649ad9dcfa236416a3318a01e3eb39fa5c20cf16d7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:30 np0005539508 podman[89077]: 2025-11-29 06:19:30.141365013 +0000 UTC m=+0.192674367 container init cf0b4f9501a2f6fdaeeead08e39f617d3b1d092a80ad607e8ee3d43fc5c56f59 (image=quay.io/ceph/ceph:v18, name=upbeat_solomon, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:19:30 np0005539508 podman[89077]: 2025-11-29 06:19:30.154645557 +0000 UTC m=+0.205954861 container start cf0b4f9501a2f6fdaeeead08e39f617d3b1d092a80ad607e8ee3d43fc5c56f59 (image=quay.io/ceph/ceph:v18, name=upbeat_solomon, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:19:30 np0005539508 podman[89077]: 2025-11-29 06:19:30.159728097 +0000 UTC m=+0.211037371 container attach cf0b4f9501a2f6fdaeeead08e39f617d3b1d092a80ad607e8ee3d43fc5c56f59 (image=quay.io/ceph/ceph:v18, name=upbeat_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:19:30 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:30 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:30 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:30 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:30 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.gaxpay", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 01:19:30 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.gaxpay", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 01:19:30 np0005539508 ceph-mon[74654]: Deploying daemon mgr.compute-1.gaxpay on compute-1
Nov 29 01:19:30 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Nov 29 01:19:30 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2714267067' entity='client.admin' 
Nov 29 01:19:30 np0005539508 upbeat_solomon[89093]: set ssl_option
Nov 29 01:19:30 np0005539508 systemd[1]: libpod-cf0b4f9501a2f6fdaeeead08e39f617d3b1d092a80ad607e8ee3d43fc5c56f59.scope: Deactivated successfully.
Nov 29 01:19:30 np0005539508 podman[89077]: 2025-11-29 06:19:30.876241007 +0000 UTC m=+0.927550311 container died cf0b4f9501a2f6fdaeeead08e39f617d3b1d092a80ad607e8ee3d43fc5c56f59 (image=quay.io/ceph/ceph:v18, name=upbeat_solomon, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:19:30 np0005539508 systemd[1]: var-lib-containers-storage-overlay-e994cc574a0853b4352b9b649ad9dcfa236416a3318a01e3eb39fa5c20cf16d7-merged.mount: Deactivated successfully.
Nov 29 01:19:30 np0005539508 podman[89077]: 2025-11-29 06:19:30.939581722 +0000 UTC m=+0.990890996 container remove cf0b4f9501a2f6fdaeeead08e39f617d3b1d092a80ad607e8ee3d43fc5c56f59 (image=quay.io/ceph/ceph:v18, name=upbeat_solomon, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:19:30 np0005539508 systemd[1]: libpod-conmon-cf0b4f9501a2f6fdaeeead08e39f617d3b1d092a80ad607e8ee3d43fc5c56f59.scope: Deactivated successfully.
Nov 29 01:19:31 np0005539508 ceph-mgr[74948]: mgr.server handle_report got status from non-daemon mon.compute-1
Nov 29 01:19:31 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:19:31.104+0000 7f90f1cf5640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Nov 29 01:19:31 np0005539508 python3[89157]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:31 np0005539508 podman[89158]: 2025-11-29 06:19:31.391777574 +0000 UTC m=+0.051468935 container create 47d05e73fbf94287d0a8caac7c0649921c2de6c772e7cb9800fff31c2d4d7387 (image=quay.io/ceph/ceph:v18, name=nifty_nobel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 01:19:31 np0005539508 systemd[1]: Started libpod-conmon-47d05e73fbf94287d0a8caac7c0649921c2de6c772e7cb9800fff31c2d4d7387.scope.
Nov 29 01:19:31 np0005539508 podman[89158]: 2025-11-29 06:19:31.367450754 +0000 UTC m=+0.027142115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:19:31 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:31 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5080b3df61e3e41960b15e862e8599ed1ecf13fe6e6b480b4ca4bc38e1bb7971/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:31 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5080b3df61e3e41960b15e862e8599ed1ecf13fe6e6b480b4ca4bc38e1bb7971/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:31 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5080b3df61e3e41960b15e862e8599ed1ecf13fe6e6b480b4ca4bc38e1bb7971/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:31 np0005539508 podman[89158]: 2025-11-29 06:19:31.492982931 +0000 UTC m=+0.152674282 container init 47d05e73fbf94287d0a8caac7c0649921c2de6c772e7cb9800fff31c2d4d7387 (image=quay.io/ceph/ceph:v18, name=nifty_nobel, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 01:19:31 np0005539508 podman[89158]: 2025-11-29 06:19:31.50137497 +0000 UTC m=+0.161066321 container start 47d05e73fbf94287d0a8caac7c0649921c2de6c772e7cb9800fff31c2d4d7387 (image=quay.io/ceph/ceph:v18, name=nifty_nobel, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 01:19:31 np0005539508 podman[89158]: 2025-11-29 06:19:31.505251435 +0000 UTC m=+0.164942796 container attach 47d05e73fbf94287d0a8caac7c0649921c2de6c772e7cb9800fff31c2d4d7387 (image=quay.io/ceph/ceph:v18, name=nifty_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 29 01:19:31 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Nov 29 01:19:31 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Nov 29 01:19:31 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/2714267067' entity='client.admin' 
Nov 29 01:19:31 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 01:19:31 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:31 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:19:31 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:31 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 01:19:31 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v103: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:32 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14256 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:19:32 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 01:19:32 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 01:19:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 01:19:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:19:32 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:32 np0005539508 ceph-mgr[74948]: [progress INFO root] complete: finished ev 878a4358-7d35-4bea-97ea-6a2ffa9735e2 (Updating mgr deployment (+2 -> 3))
Nov 29 01:19:32 np0005539508 ceph-mgr[74948]: [progress INFO root] Completed event 878a4358-7d35-4bea-97ea-6a2ffa9735e2 (Updating mgr deployment (+2 -> 3)) in 8 seconds
Nov 29 01:19:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 01:19:32 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:32 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Nov 29 01:19:32 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Nov 29 01:19:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Nov 29 01:19:32 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:32 np0005539508 ceph-mgr[74948]: [progress INFO root] update: starting ev 8784e530-6512-4060-945e-12e8ac08b061 (Updating crash deployment (+1 -> 3))
Nov 29 01:19:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 29 01:19:32 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 01:19:32 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:32 np0005539508 nifty_nobel[89172]: Scheduled rgw.rgw update...
Nov 29 01:19:32 np0005539508 nifty_nobel[89172]: Scheduled ingress.rgw.default update...
Nov 29 01:19:32 np0005539508 systemd[1]: libpod-47d05e73fbf94287d0a8caac7c0649921c2de6c772e7cb9800fff31c2d4d7387.scope: Deactivated successfully.
Nov 29 01:19:32 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 01:19:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:19:32 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:19:32 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Nov 29 01:19:32 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Nov 29 01:19:32 np0005539508 podman[89199]: 2025-11-29 06:19:32.493076299 +0000 UTC m=+0.028440133 container died 47d05e73fbf94287d0a8caac7c0649921c2de6c772e7cb9800fff31c2d4d7387 (image=quay.io/ceph/ceph:v18, name=nifty_nobel, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:19:32 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 29 01:19:32 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 29 01:19:32 np0005539508 systemd[1]: var-lib-containers-storage-overlay-5080b3df61e3e41960b15e862e8599ed1ecf13fe6e6b480b4ca4bc38e1bb7971-merged.mount: Deactivated successfully.
Nov 29 01:19:32 np0005539508 podman[89199]: 2025-11-29 06:19:32.575079197 +0000 UTC m=+0.110443011 container remove 47d05e73fbf94287d0a8caac7c0649921c2de6c772e7cb9800fff31c2d4d7387 (image=quay.io/ceph/ceph:v18, name=nifty_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:19:32 np0005539508 systemd[1]: libpod-conmon-47d05e73fbf94287d0a8caac7c0649921c2de6c772e7cb9800fff31c2d4d7387.scope: Deactivated successfully.
Nov 29 01:19:32 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:32 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:32 np0005539508 ceph-mon[74654]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 01:19:32 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:32 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:32 np0005539508 ceph-mon[74654]: Saving service ingress.rgw.default spec with placement count:2
Nov 29 01:19:32 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:32 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 01:19:32 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:32 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 01:19:32 np0005539508 ceph-mon[74654]: Deploying daemon crash.compute-2 on compute-2
Nov 29 01:19:33 np0005539508 python3[89289]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:19:33 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v104: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:34 np0005539508 python3[89360]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397173.487423-37452-84494872252177/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:19:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:19:34 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:19:34 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 01:19:34 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:34 np0005539508 ceph-mgr[74948]: [progress INFO root] complete: finished ev 8784e530-6512-4060-945e-12e8ac08b061 (Updating crash deployment (+1 -> 3))
Nov 29 01:19:34 np0005539508 ceph-mgr[74948]: [progress INFO root] Completed event 8784e530-6512-4060-945e-12e8ac08b061 (Updating crash deployment (+1 -> 3)) in 2 seconds
Nov 29 01:19:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 01:19:34 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:19:34 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:19:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:19:34 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:19:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:19:34 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:19:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:19:34 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:19:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:19:34 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:19:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:19:34 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:19:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:19:34 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:19:34 np0005539508 ceph-mgr[74948]: [progress INFO root] Writing back 6 completed events
Nov 29 01:19:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 01:19:34 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:34 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 29 01:19:34 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 29 01:19:34 np0005539508 python3[89509]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:34 np0005539508 podman[89518]: 2025-11-29 06:19:34.896424742 +0000 UTC m=+0.050821376 container create 31620dddb9f46df4e47574567a6db28ed6a6f620272d46ccbf253a4b8b5dcd80 (image=quay.io/ceph/ceph:v18, name=reverent_torvalds, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:19:34 np0005539508 systemd[1]: Started libpod-conmon-31620dddb9f46df4e47574567a6db28ed6a6f620272d46ccbf253a4b8b5dcd80.scope.
Nov 29 01:19:34 np0005539508 podman[89518]: 2025-11-29 06:19:34.875867774 +0000 UTC m=+0.030264528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:19:34 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:34 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56bcda562af46f26337908d6e0886a0552aa14d2b9baf091859a0873b55e4018/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:34 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56bcda562af46f26337908d6e0886a0552aa14d2b9baf091859a0873b55e4018/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:34 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56bcda562af46f26337908d6e0886a0552aa14d2b9baf091859a0873b55e4018/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:34 np0005539508 podman[89518]: 2025-11-29 06:19:34.99360632 +0000 UTC m=+0.148002974 container init 31620dddb9f46df4e47574567a6db28ed6a6f620272d46ccbf253a4b8b5dcd80 (image=quay.io/ceph/ceph:v18, name=reverent_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 01:19:35 np0005539508 podman[89518]: 2025-11-29 06:19:35.001553606 +0000 UTC m=+0.155950270 container start 31620dddb9f46df4e47574567a6db28ed6a6f620272d46ccbf253a4b8b5dcd80 (image=quay.io/ceph/ceph:v18, name=reverent_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:19:35 np0005539508 podman[89518]: 2025-11-29 06:19:35.00541707 +0000 UTC m=+0.159813694 container attach 31620dddb9f46df4e47574567a6db28ed6a6f620272d46ccbf253a4b8b5dcd80 (image=quay.io/ceph/ceph:v18, name=reverent_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:19:35 np0005539508 podman[89570]: 2025-11-29 06:19:35.158556035 +0000 UTC m=+0.064334476 container create 2971cffbed226a33cfc7b0f1461f01cd4b1fe258ad5004cdbc9e152e2c5784ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_curran, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 01:19:35 np0005539508 systemd[1]: Started libpod-conmon-2971cffbed226a33cfc7b0f1461f01cd4b1fe258ad5004cdbc9e152e2c5784ad.scope.
Nov 29 01:19:35 np0005539508 podman[89570]: 2025-11-29 06:19:35.12223309 +0000 UTC m=+0.028011611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:19:35 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:35 np0005539508 podman[89570]: 2025-11-29 06:19:35.259581387 +0000 UTC m=+0.165359918 container init 2971cffbed226a33cfc7b0f1461f01cd4b1fe258ad5004cdbc9e152e2c5784ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_curran, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:19:35 np0005539508 podman[89570]: 2025-11-29 06:19:35.271278974 +0000 UTC m=+0.177057445 container start 2971cffbed226a33cfc7b0f1461f01cd4b1fe258ad5004cdbc9e152e2c5784ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_curran, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:19:35 np0005539508 podman[89570]: 2025-11-29 06:19:35.276280822 +0000 UTC m=+0.182059303 container attach 2971cffbed226a33cfc7b0f1461f01cd4b1fe258ad5004cdbc9e152e2c5784ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_curran, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 29 01:19:35 np0005539508 nice_curran[89587]: 167 167
Nov 29 01:19:35 np0005539508 systemd[1]: libpod-2971cffbed226a33cfc7b0f1461f01cd4b1fe258ad5004cdbc9e152e2c5784ad.scope: Deactivated successfully.
Nov 29 01:19:35 np0005539508 podman[89570]: 2025-11-29 06:19:35.278070305 +0000 UTC m=+0.183848786 container died 2971cffbed226a33cfc7b0f1461f01cd4b1fe258ad5004cdbc9e152e2c5784ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 29 01:19:35 np0005539508 systemd[1]: var-lib-containers-storage-overlay-5fb694ba32fff9729ab004e5b26fc1098cc241cdb47b96d9bfc4fedcb78256ad-merged.mount: Deactivated successfully.
Nov 29 01:19:35 np0005539508 podman[89570]: 2025-11-29 06:19:35.336134454 +0000 UTC m=+0.241912895 container remove 2971cffbed226a33cfc7b0f1461f01cd4b1fe258ad5004cdbc9e152e2c5784ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 01:19:35 np0005539508 systemd[1]: libpod-conmon-2971cffbed226a33cfc7b0f1461f01cd4b1fe258ad5004cdbc9e152e2c5784ad.scope: Deactivated successfully.
Nov 29 01:19:35 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:19:35 np0005539508 ceph-mgr[74948]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 29 01:19:35 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0[74650]: 2025-11-29T06:19:35.588+0000 7fe455879640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 29 01:19:35 np0005539508 podman[89628]: 2025-11-29 06:19:35.552042118 +0000 UTC m=+0.038069758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:19:35 np0005539508 podman[89628]: 2025-11-29 06:19:35.667292051 +0000 UTC m=+0.153319621 container create ad247c9dcb3742a3aa50756a2847030a9d6da9cc0123086b33fc7f6010d4744e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_blackburn, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e2 new map
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T06:19:35.588785+0000#012modified#0112025-11-29T06:19:35.589013+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e30 e30: 2 total, 2 up, 2 in
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e30: 2 total, 2 up, 2 in
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Nov 29 01:19:35 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 01:19:35 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 01:19:35 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:35 np0005539508 ceph-mgr[74948]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 29 01:19:35 np0005539508 podman[89518]: 2025-11-29 06:19:35.715680404 +0000 UTC m=+0.870077078 container died 31620dddb9f46df4e47574567a6db28ed6a6f620272d46ccbf253a4b8b5dcd80 (image=quay.io/ceph/ceph:v18, name=reverent_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 01:19:35 np0005539508 systemd[1]: Started libpod-conmon-ad247c9dcb3742a3aa50756a2847030a9d6da9cc0123086b33fc7f6010d4744e.scope.
Nov 29 01:19:35 np0005539508 systemd[1]: libpod-31620dddb9f46df4e47574567a6db28ed6a6f620272d46ccbf253a4b8b5dcd80.scope: Deactivated successfully.
Nov 29 01:19:35 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:35 np0005539508 systemd[1]: var-lib-containers-storage-overlay-56bcda562af46f26337908d6e0886a0552aa14d2b9baf091859a0873b55e4018-merged.mount: Deactivated successfully.
Nov 29 01:19:35 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d61b98b5d187db383c801b9b72b96ea41e1d428732325a121362365fa1a890/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:35 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d61b98b5d187db383c801b9b72b96ea41e1d428732325a121362365fa1a890/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:35 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d61b98b5d187db383c801b9b72b96ea41e1d428732325a121362365fa1a890/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:35 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d61b98b5d187db383c801b9b72b96ea41e1d428732325a121362365fa1a890/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:35 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d61b98b5d187db383c801b9b72b96ea41e1d428732325a121362365fa1a890/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:35 np0005539508 podman[89518]: 2025-11-29 06:19:35.827942079 +0000 UTC m=+0.982338723 container remove 31620dddb9f46df4e47574567a6db28ed6a6f620272d46ccbf253a4b8b5dcd80 (image=quay.io/ceph/ceph:v18, name=reverent_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:19:35 np0005539508 podman[89628]: 2025-11-29 06:19:35.834942456 +0000 UTC m=+0.320970016 container init ad247c9dcb3742a3aa50756a2847030a9d6da9cc0123086b33fc7f6010d4744e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 01:19:35 np0005539508 podman[89628]: 2025-11-29 06:19:35.845374055 +0000 UTC m=+0.331401645 container start ad247c9dcb3742a3aa50756a2847030a9d6da9cc0123086b33fc7f6010d4744e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_blackburn, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:19:35 np0005539508 systemd[1]: libpod-conmon-31620dddb9f46df4e47574567a6db28ed6a6f620272d46ccbf253a4b8b5dcd80.scope: Deactivated successfully.
Nov 29 01:19:35 np0005539508 podman[89628]: 2025-11-29 06:19:35.850679272 +0000 UTC m=+0.336706822 container attach ad247c9dcb3742a3aa50756a2847030a9d6da9cc0123086b33fc7f6010d4744e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_blackburn, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 01:19:35 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v106: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:36 np0005539508 python3[89690]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:36 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 29 01:19:36 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 29 01:19:36 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 29 01:19:36 np0005539508 ceph-mon[74654]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 01:19:36 np0005539508 ceph-mon[74654]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 29 01:19:36 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 29 01:19:36 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:36 np0005539508 podman[89691]: 2025-11-29 06:19:36.350077672 +0000 UTC m=+0.073347833 container create 4b804124fa9347391453ea0123ded4916cb8ff3b1010e6a5c310e014ae5d125c (image=quay.io/ceph/ceph:v18, name=objective_hopper, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:19:36 np0005539508 systemd[1]: Started libpod-conmon-4b804124fa9347391453ea0123ded4916cb8ff3b1010e6a5c310e014ae5d125c.scope.
Nov 29 01:19:36 np0005539508 podman[89691]: 2025-11-29 06:19:36.310572752 +0000 UTC m=+0.033842953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:19:36 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:36 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8642ed8e0ba8cd2cb266cc29d22e1a9b1a0a44d17bf833660e03b94e86b34201/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:36 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8642ed8e0ba8cd2cb266cc29d22e1a9b1a0a44d17bf833660e03b94e86b34201/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:36 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8642ed8e0ba8cd2cb266cc29d22e1a9b1a0a44d17bf833660e03b94e86b34201/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:36 np0005539508 podman[89691]: 2025-11-29 06:19:36.466201121 +0000 UTC m=+0.189471282 container init 4b804124fa9347391453ea0123ded4916cb8ff3b1010e6a5c310e014ae5d125c (image=quay.io/ceph/ceph:v18, name=objective_hopper, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:19:36 np0005539508 podman[89691]: 2025-11-29 06:19:36.478438673 +0000 UTC m=+0.201708814 container start 4b804124fa9347391453ea0123ded4916cb8ff3b1010e6a5c310e014ae5d125c (image=quay.io/ceph/ceph:v18, name=objective_hopper, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 01:19:36 np0005539508 podman[89691]: 2025-11-29 06:19:36.48237735 +0000 UTC m=+0.205647491 container attach 4b804124fa9347391453ea0123ded4916cb8ff3b1010e6a5c310e014ae5d125c (image=quay.io/ceph/ceph:v18, name=objective_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 01:19:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "f86a06f9-a09f-46de-8440-929a842d2c66"} v 0) v1
Nov 29 01:19:36 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f86a06f9-a09f-46de-8440-929a842d2c66"}]: dispatch
Nov 29 01:19:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 29 01:19:36 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f86a06f9-a09f-46de-8440-929a842d2c66"}]': finished
Nov 29 01:19:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e31 e31: 3 total, 2 up, 3 in
Nov 29 01:19:36 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 2 up, 3 in
Nov 29 01:19:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:19:36 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:19:36 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:19:36 np0005539508 sharp_blackburn[89653]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:19:36 np0005539508 sharp_blackburn[89653]: --> relative data size: 1.0
Nov 29 01:19:36 np0005539508 sharp_blackburn[89653]: --> All data devices are unavailable
Nov 29 01:19:36 np0005539508 systemd[1]: libpod-ad247c9dcb3742a3aa50756a2847030a9d6da9cc0123086b33fc7f6010d4744e.scope: Deactivated successfully.
Nov 29 01:19:36 np0005539508 conmon[89653]: conmon ad247c9dcb3742a3aa50 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad247c9dcb3742a3aa50756a2847030a9d6da9cc0123086b33fc7f6010d4744e.scope/container/memory.events
Nov 29 01:19:36 np0005539508 podman[89628]: 2025-11-29 06:19:36.778540591 +0000 UTC m=+1.264568141 container died ad247c9dcb3742a3aa50756a2847030a9d6da9cc0123086b33fc7f6010d4744e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_blackburn, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 01:19:36 np0005539508 systemd[1]: var-lib-containers-storage-overlay-41d61b98b5d187db383c801b9b72b96ea41e1d428732325a121362365fa1a890-merged.mount: Deactivated successfully.
Nov 29 01:19:36 np0005539508 podman[89628]: 2025-11-29 06:19:36.830269983 +0000 UTC m=+1.316297523 container remove ad247c9dcb3742a3aa50756a2847030a9d6da9cc0123086b33fc7f6010d4744e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:19:36 np0005539508 systemd[1]: libpod-conmon-ad247c9dcb3742a3aa50756a2847030a9d6da9cc0123086b33fc7f6010d4744e.scope: Deactivated successfully.
Nov 29 01:19:37 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14268 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:19:37 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 01:19:37 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 01:19:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 01:19:37 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:37 np0005539508 objective_hopper[89706]: Scheduled mds.cephfs update...
Nov 29 01:19:37 np0005539508 systemd[1]: libpod-4b804124fa9347391453ea0123ded4916cb8ff3b1010e6a5c310e014ae5d125c.scope: Deactivated successfully.
Nov 29 01:19:37 np0005539508 podman[89691]: 2025-11-29 06:19:37.135638635 +0000 UTC m=+0.858908786 container died 4b804124fa9347391453ea0123ded4916cb8ff3b1010e6a5c310e014ae5d125c (image=quay.io/ceph/ceph:v18, name=objective_hopper, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:19:37 np0005539508 systemd[1]: var-lib-containers-storage-overlay-8642ed8e0ba8cd2cb266cc29d22e1a9b1a0a44d17bf833660e03b94e86b34201-merged.mount: Deactivated successfully.
Nov 29 01:19:37 np0005539508 podman[89691]: 2025-11-29 06:19:37.185588414 +0000 UTC m=+0.908858535 container remove 4b804124fa9347391453ea0123ded4916cb8ff3b1010e6a5c310e014ae5d125c (image=quay.io/ceph/ceph:v18, name=objective_hopper, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 01:19:37 np0005539508 systemd[1]: libpod-conmon-4b804124fa9347391453ea0123ded4916cb8ff3b1010e6a5c310e014ae5d125c.scope: Deactivated successfully.
Nov 29 01:19:37 np0005539508 ceph-mon[74654]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 01:19:37 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.102:0/2624547066' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f86a06f9-a09f-46de-8440-929a842d2c66"}]: dispatch
Nov 29 01:19:37 np0005539508 ceph-mon[74654]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f86a06f9-a09f-46de-8440-929a842d2c66"}]: dispatch
Nov 29 01:19:37 np0005539508 ceph-mon[74654]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f86a06f9-a09f-46de-8440-929a842d2c66"}]': finished
Nov 29 01:19:37 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:19:37 np0005539508 podman[89906]: 2025-11-29 06:19:37.423096748 +0000 UTC m=+0.054039981 container create 7d713f0ff7a157cd105ebacaf73b8ce28d087bc93e93d187b32b51ed4270b15e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 01:19:37 np0005539508 systemd[1]: Started libpod-conmon-7d713f0ff7a157cd105ebacaf73b8ce28d087bc93e93d187b32b51ed4270b15e.scope.
Nov 29 01:19:37 np0005539508 podman[89906]: 2025-11-29 06:19:37.397027846 +0000 UTC m=+0.027971109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:19:37 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:37 np0005539508 podman[89906]: 2025-11-29 06:19:37.509724984 +0000 UTC m=+0.140668197 container init 7d713f0ff7a157cd105ebacaf73b8ce28d087bc93e93d187b32b51ed4270b15e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 01:19:37 np0005539508 podman[89906]: 2025-11-29 06:19:37.521452841 +0000 UTC m=+0.152396044 container start 7d713f0ff7a157cd105ebacaf73b8ce28d087bc93e93d187b32b51ed4270b15e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 01:19:37 np0005539508 interesting_aryabhata[89922]: 167 167
Nov 29 01:19:37 np0005539508 systemd[1]: libpod-7d713f0ff7a157cd105ebacaf73b8ce28d087bc93e93d187b32b51ed4270b15e.scope: Deactivated successfully.
Nov 29 01:19:37 np0005539508 podman[89906]: 2025-11-29 06:19:37.524603244 +0000 UTC m=+0.155546447 container attach 7d713f0ff7a157cd105ebacaf73b8ce28d087bc93e93d187b32b51ed4270b15e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 01:19:37 np0005539508 podman[89906]: 2025-11-29 06:19:37.526207582 +0000 UTC m=+0.157150825 container died 7d713f0ff7a157cd105ebacaf73b8ce28d087bc93e93d187b32b51ed4270b15e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_aryabhata, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:19:37 np0005539508 systemd[1]: var-lib-containers-storage-overlay-cc479b3bf604959897f332ae3979339e885dcbc5f4068eeb0a75ccf766932956-merged.mount: Deactivated successfully.
Nov 29 01:19:37 np0005539508 podman[89906]: 2025-11-29 06:19:37.562386053 +0000 UTC m=+0.193329266 container remove 7d713f0ff7a157cd105ebacaf73b8ce28d087bc93e93d187b32b51ed4270b15e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_aryabhata, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 01:19:37 np0005539508 systemd[1]: libpod-conmon-7d713f0ff7a157cd105ebacaf73b8ce28d087bc93e93d187b32b51ed4270b15e.scope: Deactivated successfully.
Nov 29 01:19:37 np0005539508 podman[89987]: 2025-11-29 06:19:37.782931704 +0000 UTC m=+0.065584433 container create 0b2de8238d963afc4b403a79dab7f62f97cb70bd7318526f5bc1e9b327df5ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_robinson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 01:19:37 np0005539508 systemd[1]: Started libpod-conmon-0b2de8238d963afc4b403a79dab7f62f97cb70bd7318526f5bc1e9b327df5ba3.scope.
Nov 29 01:19:37 np0005539508 podman[89987]: 2025-11-29 06:19:37.760593233 +0000 UTC m=+0.043245982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:19:37 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:37 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5e173f2e3db63b46924b7bb1e38407b2ce7be589638601bf3b70d8b6587dcb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:37 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5e173f2e3db63b46924b7bb1e38407b2ce7be589638601bf3b70d8b6587dcb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:37 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5e173f2e3db63b46924b7bb1e38407b2ce7be589638601bf3b70d8b6587dcb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:37 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5e173f2e3db63b46924b7bb1e38407b2ce7be589638601bf3b70d8b6587dcb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:37 np0005539508 podman[89987]: 2025-11-29 06:19:37.894808418 +0000 UTC m=+0.177461127 container init 0b2de8238d963afc4b403a79dab7f62f97cb70bd7318526f5bc1e9b327df5ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 01:19:37 np0005539508 podman[89987]: 2025-11-29 06:19:37.904610858 +0000 UTC m=+0.187263577 container start 0b2de8238d963afc4b403a79dab7f62f97cb70bd7318526f5bc1e9b327df5ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_robinson, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:19:37 np0005539508 podman[89987]: 2025-11-29 06:19:37.919932742 +0000 UTC m=+0.202585431 container attach 0b2de8238d963afc4b403a79dab7f62f97cb70bd7318526f5bc1e9b327df5ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_robinson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 01:19:37 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v108: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:37 np0005539508 python3[90041]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:19:38 np0005539508 python3[90117]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397177.6429858-37482-104583698948214/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=d5bc1b1c0617b147c8e3e13846b179249a244079 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:19:38 np0005539508 ceph-mon[74654]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]: {
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:    "1": [
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:        {
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:            "devices": [
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:                "/dev/loop3"
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:            ],
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:            "lv_name": "ceph_lv0",
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:            "lv_size": "7511998464",
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:            "name": "ceph_lv0",
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:            "tags": {
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:                "ceph.cluster_name": "ceph",
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:                "ceph.crush_device_class": "",
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:                "ceph.encrypted": "0",
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:                "ceph.osd_id": "1",
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:                "ceph.type": "block",
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:                "ceph.vdo": "0"
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:            },
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:            "type": "block",
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:            "vg_name": "ceph_vg0"
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:        }
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]:    ]
Nov 29 01:19:38 np0005539508 peaceful_robinson[90039]: }
Nov 29 01:19:38 np0005539508 systemd[1]: libpod-0b2de8238d963afc4b403a79dab7f62f97cb70bd7318526f5bc1e9b327df5ba3.scope: Deactivated successfully.
Nov 29 01:19:38 np0005539508 podman[89987]: 2025-11-29 06:19:38.817316788 +0000 UTC m=+1.099969487 container died 0b2de8238d963afc4b403a79dab7f62f97cb70bd7318526f5bc1e9b327df5ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_robinson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 29 01:19:38 np0005539508 systemd[1]: var-lib-containers-storage-overlay-c5e173f2e3db63b46924b7bb1e38407b2ce7be589638601bf3b70d8b6587dcb2-merged.mount: Deactivated successfully.
Nov 29 01:19:38 np0005539508 podman[89987]: 2025-11-29 06:19:38.879503889 +0000 UTC m=+1.162156578 container remove 0b2de8238d963afc4b403a79dab7f62f97cb70bd7318526f5bc1e9b327df5ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 01:19:38 np0005539508 systemd[1]: libpod-conmon-0b2de8238d963afc4b403a79dab7f62f97cb70bd7318526f5bc1e9b327df5ba3.scope: Deactivated successfully.
Nov 29 01:19:38 np0005539508 python3[90183]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:39 np0005539508 podman[90211]: 2025-11-29 06:19:39.062358594 +0000 UTC m=+0.048557999 container create 07edeb099061320435a696fb2151be785c91b589ff020803a866e3902e3543ef (image=quay.io/ceph/ceph:v18, name=awesome_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 01:19:39 np0005539508 systemd[1]: Started libpod-conmon-07edeb099061320435a696fb2151be785c91b589ff020803a866e3902e3543ef.scope.
Nov 29 01:19:39 np0005539508 podman[90211]: 2025-11-29 06:19:39.035338204 +0000 UTC m=+0.021537639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:19:39 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:39 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a0eb1c6a28e41f4fb97f4a0d9c2bf15116b30a3ff29d7c3b08a255da837e72/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:39 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a0eb1c6a28e41f4fb97f4a0d9c2bf15116b30a3ff29d7c3b08a255da837e72/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:39 np0005539508 podman[90211]: 2025-11-29 06:19:39.168142857 +0000 UTC m=+0.154342272 container init 07edeb099061320435a696fb2151be785c91b589ff020803a866e3902e3543ef (image=quay.io/ceph/ceph:v18, name=awesome_edison, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 01:19:39 np0005539508 podman[90211]: 2025-11-29 06:19:39.178648288 +0000 UTC m=+0.164847683 container start 07edeb099061320435a696fb2151be785c91b589ff020803a866e3902e3543ef (image=quay.io/ceph/ceph:v18, name=awesome_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:19:39 np0005539508 podman[90211]: 2025-11-29 06:19:39.182546494 +0000 UTC m=+0.168745889 container attach 07edeb099061320435a696fb2151be785c91b589ff020803a866e3902e3543ef (image=quay.io/ceph/ceph:v18, name=awesome_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Nov 29 01:19:39 np0005539508 podman[90354]: 2025-11-29 06:19:39.558365183 +0000 UTC m=+0.047829567 container create 63da6322881d67927e5b312b3aa4f5e7b97ed9d208d4be681cc8ecea6c2e5055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_carson, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:19:39 np0005539508 systemd[1]: Started libpod-conmon-63da6322881d67927e5b312b3aa4f5e7b97ed9d208d4be681cc8ecea6c2e5055.scope.
Nov 29 01:19:39 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:39 np0005539508 podman[90354]: 2025-11-29 06:19:39.62441614 +0000 UTC m=+0.113880544 container init 63da6322881d67927e5b312b3aa4f5e7b97ed9d208d4be681cc8ecea6c2e5055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_carson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:19:39 np0005539508 podman[90354]: 2025-11-29 06:19:39.630320064 +0000 UTC m=+0.119784468 container start 63da6322881d67927e5b312b3aa4f5e7b97ed9d208d4be681cc8ecea6c2e5055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 01:19:39 np0005539508 podman[90354]: 2025-11-29 06:19:39.633796447 +0000 UTC m=+0.123260911 container attach 63da6322881d67927e5b312b3aa4f5e7b97ed9d208d4be681cc8ecea6c2e5055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_carson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 01:19:39 np0005539508 gallant_carson[90380]: 167 167
Nov 29 01:19:39 np0005539508 systemd[1]: libpod-63da6322881d67927e5b312b3aa4f5e7b97ed9d208d4be681cc8ecea6c2e5055.scope: Deactivated successfully.
Nov 29 01:19:39 np0005539508 podman[90354]: 2025-11-29 06:19:39.635115296 +0000 UTC m=+0.124579690 container died 63da6322881d67927e5b312b3aa4f5e7b97ed9d208d4be681cc8ecea6c2e5055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_carson, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:19:39 np0005539508 podman[90354]: 2025-11-29 06:19:39.544462972 +0000 UTC m=+0.033927386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:19:39 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 29 01:19:39 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 29 01:19:39 np0005539508 systemd[1]: var-lib-containers-storage-overlay-6c7396892fd46f12197eba32e357620c4e19b71aa9c00346ee76b0764c6abafc-merged.mount: Deactivated successfully.
Nov 29 01:19:39 np0005539508 podman[90354]: 2025-11-29 06:19:39.678917294 +0000 UTC m=+0.168381698 container remove 63da6322881d67927e5b312b3aa4f5e7b97ed9d208d4be681cc8ecea6c2e5055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 01:19:39 np0005539508 systemd[1]: libpod-conmon-63da6322881d67927e5b312b3aa4f5e7b97ed9d208d4be681cc8ecea6c2e5055.scope: Deactivated successfully.
Nov 29 01:19:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0) v1
Nov 29 01:19:39 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/713391435' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 29 01:19:39 np0005539508 podman[90405]: 2025-11-29 06:19:39.867840628 +0000 UTC m=+0.040185971 container create 8bccaa9050bdf40d55c7768703f2c11a785c306e2797d643678bdc1212a11429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 01:19:39 np0005539508 systemd[1]: Started libpod-conmon-8bccaa9050bdf40d55c7768703f2c11a785c306e2797d643678bdc1212a11429.scope.
Nov 29 01:19:39 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:39 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62da68cb08238a8ed465ad7586860a65658daef13707a7787ba5ebca107ee60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:39 np0005539508 podman[90405]: 2025-11-29 06:19:39.849339311 +0000 UTC m=+0.021684664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:19:39 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62da68cb08238a8ed465ad7586860a65658daef13707a7787ba5ebca107ee60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:39 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62da68cb08238a8ed465ad7586860a65658daef13707a7787ba5ebca107ee60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:39 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62da68cb08238a8ed465ad7586860a65658daef13707a7787ba5ebca107ee60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:39 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v109: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:40 np0005539508 podman[90405]: 2025-11-29 06:19:40.036149263 +0000 UTC m=+0.208494636 container init 8bccaa9050bdf40d55c7768703f2c11a785c306e2797d643678bdc1212a11429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 01:19:40 np0005539508 podman[90405]: 2025-11-29 06:19:40.048416976 +0000 UTC m=+0.220762339 container start 8bccaa9050bdf40d55c7768703f2c11a785c306e2797d643678bdc1212a11429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_einstein, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 01:19:40 np0005539508 podman[90405]: 2025-11-29 06:19:40.072715096 +0000 UTC m=+0.245060419 container attach 8bccaa9050bdf40d55c7768703f2c11a785c306e2797d643678bdc1212a11429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 01:19:40 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/713391435' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 29 01:19:40 np0005539508 podman[90211]: 2025-11-29 06:19:40.655862255 +0000 UTC m=+1.642061650 container died 07edeb099061320435a696fb2151be785c91b589ff020803a866e3902e3543ef (image=quay.io/ceph/ceph:v18, name=awesome_edison, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 01:19:40 np0005539508 systemd[1]: libpod-07edeb099061320435a696fb2151be785c91b589ff020803a866e3902e3543ef.scope: Deactivated successfully.
Nov 29 01:19:40 np0005539508 systemd[1]: var-lib-containers-storage-overlay-e0a0eb1c6a28e41f4fb97f4a0d9c2bf15116b30a3ff29d7c3b08a255da837e72-merged.mount: Deactivated successfully.
Nov 29 01:19:40 np0005539508 podman[90211]: 2025-11-29 06:19:40.712650936 +0000 UTC m=+1.698850321 container remove 07edeb099061320435a696fb2151be785c91b589ff020803a866e3902e3543ef (image=quay.io/ceph/ceph:v18, name=awesome_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 01:19:40 np0005539508 systemd[1]: libpod-conmon-07edeb099061320435a696fb2151be785c91b589ff020803a866e3902e3543ef.scope: Deactivated successfully.
Nov 29 01:19:40 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/713391435' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 29 01:19:40 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/713391435' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 29 01:19:40 np0005539508 brave_einstein[90421]: {
Nov 29 01:19:40 np0005539508 brave_einstein[90421]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:19:40 np0005539508 brave_einstein[90421]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:19:40 np0005539508 brave_einstein[90421]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:19:40 np0005539508 brave_einstein[90421]:        "osd_id": 1,
Nov 29 01:19:40 np0005539508 brave_einstein[90421]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:19:40 np0005539508 brave_einstein[90421]:        "type": "bluestore"
Nov 29 01:19:40 np0005539508 brave_einstein[90421]:    }
Nov 29 01:19:40 np0005539508 brave_einstein[90421]: }
Nov 29 01:19:40 np0005539508 systemd[1]: libpod-8bccaa9050bdf40d55c7768703f2c11a785c306e2797d643678bdc1212a11429.scope: Deactivated successfully.
Nov 29 01:19:40 np0005539508 podman[90405]: 2025-11-29 06:19:40.881802066 +0000 UTC m=+1.054147399 container died 8bccaa9050bdf40d55c7768703f2c11a785c306e2797d643678bdc1212a11429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_einstein, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:19:40 np0005539508 systemd[1]: var-lib-containers-storage-overlay-d62da68cb08238a8ed465ad7586860a65658daef13707a7787ba5ebca107ee60-merged.mount: Deactivated successfully.
Nov 29 01:19:40 np0005539508 podman[90405]: 2025-11-29 06:19:40.933477596 +0000 UTC m=+1.105822919 container remove 8bccaa9050bdf40d55c7768703f2c11a785c306e2797d643678bdc1212a11429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_einstein, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 01:19:40 np0005539508 systemd[1]: libpod-conmon-8bccaa9050bdf40d55c7768703f2c11a785c306e2797d643678bdc1212a11429.scope: Deactivated successfully.
Nov 29 01:19:40 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:19:40 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:40 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:19:40 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:41 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 01:19:41 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:41 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:19:41 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:41 np0005539508 python3[90493]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:41 np0005539508 podman[90495]: 2025-11-29 06:19:41.565225705 +0000 UTC m=+0.042908962 container create e1f8a52f6ec7d42b84ebd1b4716dd70bdd61cd647aa5a2e31ce20935a8938aba (image=quay.io/ceph/ceph:v18, name=elegant_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 01:19:41 np0005539508 systemd[1]: Started libpod-conmon-e1f8a52f6ec7d42b84ebd1b4716dd70bdd61cd647aa5a2e31ce20935a8938aba.scope.
Nov 29 01:19:41 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:41 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5a343d228e3c0b5386b8c41f6ce98f9abff635d51d36911c476b714f1bf801a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:41 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5a343d228e3c0b5386b8c41f6ce98f9abff635d51d36911c476b714f1bf801a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:41 np0005539508 podman[90495]: 2025-11-29 06:19:41.63327137 +0000 UTC m=+0.110954677 container init e1f8a52f6ec7d42b84ebd1b4716dd70bdd61cd647aa5a2e31ce20935a8938aba (image=quay.io/ceph/ceph:v18, name=elegant_newton, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:19:41 np0005539508 podman[90495]: 2025-11-29 06:19:41.640642038 +0000 UTC m=+0.118325315 container start e1f8a52f6ec7d42b84ebd1b4716dd70bdd61cd647aa5a2e31ce20935a8938aba (image=quay.io/ceph/ceph:v18, name=elegant_newton, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:19:41 np0005539508 podman[90495]: 2025-11-29 06:19:41.549467208 +0000 UTC m=+0.027150485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:19:41 np0005539508 podman[90495]: 2025-11-29 06:19:41.644160873 +0000 UTC m=+0.121844180 container attach e1f8a52f6ec7d42b84ebd1b4716dd70bdd61cd647aa5a2e31ce20935a8938aba (image=quay.io/ceph/ceph:v18, name=elegant_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 01:19:41 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v110: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:42 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:42 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:42 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:42 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 01:19:42 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1241390295' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 01:19:42 np0005539508 elegant_newton[90511]: 
Nov 29 01:19:42 np0005539508 elegant_newton[90511]: {"fsid":"336ec58c-893b-528f-a0c1-6ed1196bc047","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":12,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":31,"num_osds":3,"num_up_osds":2,"osd_up_since":1764397129,"num_in_osds":3,"osd_in_since":1764397176,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":38}],"num_pgs":38,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":56020992,"bytes_avail":14967975936,"bytes_total":15023996928},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-29T06:17:55.922038+0000","services":{}},"progress_events":{}}
Nov 29 01:19:42 np0005539508 systemd[1]: libpod-e1f8a52f6ec7d42b84ebd1b4716dd70bdd61cd647aa5a2e31ce20935a8938aba.scope: Deactivated successfully.
Nov 29 01:19:42 np0005539508 conmon[90511]: conmon e1f8a52f6ec7d42b84eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e1f8a52f6ec7d42b84ebd1b4716dd70bdd61cd647aa5a2e31ce20935a8938aba.scope/container/memory.events
Nov 29 01:19:42 np0005539508 podman[90495]: 2025-11-29 06:19:42.249037486 +0000 UTC m=+0.726720783 container died e1f8a52f6ec7d42b84ebd1b4716dd70bdd61cd647aa5a2e31ce20935a8938aba (image=quay.io/ceph/ceph:v18, name=elegant_newton, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 01:19:42 np0005539508 systemd[1]: var-lib-containers-storage-overlay-b5a343d228e3c0b5386b8c41f6ce98f9abff635d51d36911c476b714f1bf801a-merged.mount: Deactivated successfully.
Nov 29 01:19:42 np0005539508 podman[90495]: 2025-11-29 06:19:42.30118031 +0000 UTC m=+0.778863577 container remove e1f8a52f6ec7d42b84ebd1b4716dd70bdd61cd647aa5a2e31ce20935a8938aba (image=quay.io/ceph/ceph:v18, name=elegant_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 01:19:42 np0005539508 systemd[1]: libpod-conmon-e1f8a52f6ec7d42b84ebd1b4716dd70bdd61cd647aa5a2e31ce20935a8938aba.scope: Deactivated successfully.
Nov 29 01:19:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:19:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Nov 29 01:19:42 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 29 01:19:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:19:42 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:19:42 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Nov 29 01:19:42 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Nov 29 01:19:42 np0005539508 python3[90572]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:42 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 29 01:19:42 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 29 01:19:42 np0005539508 podman[90573]: 2025-11-29 06:19:42.660747299 +0000 UTC m=+0.026502736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:19:42 np0005539508 podman[90573]: 2025-11-29 06:19:42.870041157 +0000 UTC m=+0.235796514 container create 6f37a8689d9ada5544b29c5ca0efdcb56624fe3d5899e1b0879258fc721a8c09 (image=quay.io/ceph/ceph:v18, name=great_galois, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 01:19:43 np0005539508 systemd[1]: Started libpod-conmon-6f37a8689d9ada5544b29c5ca0efdcb56624fe3d5899e1b0879258fc721a8c09.scope.
Nov 29 01:19:43 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:43 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 29 01:19:43 np0005539508 ceph-mon[74654]: Deploying daemon osd.2 on compute-2
Nov 29 01:19:43 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48390325f80cc7148cc34765490bf46ec64eda6a60057e103ea9199acab1ad85/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:43 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48390325f80cc7148cc34765490bf46ec64eda6a60057e103ea9199acab1ad85/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:43 np0005539508 podman[90573]: 2025-11-29 06:19:43.252281357 +0000 UTC m=+0.618036694 container init 6f37a8689d9ada5544b29c5ca0efdcb56624fe3d5899e1b0879258fc721a8c09 (image=quay.io/ceph/ceph:v18, name=great_galois, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:19:43 np0005539508 podman[90573]: 2025-11-29 06:19:43.262616023 +0000 UTC m=+0.628371370 container start 6f37a8689d9ada5544b29c5ca0efdcb56624fe3d5899e1b0879258fc721a8c09 (image=quay.io/ceph/ceph:v18, name=great_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 01:19:43 np0005539508 podman[90573]: 2025-11-29 06:19:43.266865179 +0000 UTC m=+0.632620506 container attach 6f37a8689d9ada5544b29c5ca0efdcb56624fe3d5899e1b0879258fc721a8c09 (image=quay.io/ceph/ceph:v18, name=great_galois, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:19:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 01:19:43 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/264614796' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 01:19:43 np0005539508 great_galois[90589]: 
Nov 29 01:19:43 np0005539508 great_galois[90589]: {"epoch":3,"fsid":"336ec58c-893b-528f-a0c1-6ed1196bc047","modified":"2025-11-29T06:19:24.108161Z","created":"2025-11-29T06:16:01.724679Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Nov 29 01:19:43 np0005539508 great_galois[90589]: dumped monmap epoch 3
Nov 29 01:19:43 np0005539508 systemd[1]: libpod-6f37a8689d9ada5544b29c5ca0efdcb56624fe3d5899e1b0879258fc721a8c09.scope: Deactivated successfully.
Nov 29 01:19:43 np0005539508 podman[90573]: 2025-11-29 06:19:43.899822424 +0000 UTC m=+1.265577741 container died 6f37a8689d9ada5544b29c5ca0efdcb56624fe3d5899e1b0879258fc721a8c09 (image=quay.io/ceph/ceph:v18, name=great_galois, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:19:43 np0005539508 systemd[1]: var-lib-containers-storage-overlay-48390325f80cc7148cc34765490bf46ec64eda6a60057e103ea9199acab1ad85-merged.mount: Deactivated successfully.
Nov 29 01:19:43 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v111: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:44 np0005539508 podman[90573]: 2025-11-29 06:19:44.125978711 +0000 UTC m=+1.491734048 container remove 6f37a8689d9ada5544b29c5ca0efdcb56624fe3d5899e1b0879258fc721a8c09 (image=quay.io/ceph/ceph:v18, name=great_galois, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 01:19:44 np0005539508 systemd[1]: libpod-conmon-6f37a8689d9ada5544b29c5ca0efdcb56624fe3d5899e1b0879258fc721a8c09.scope: Deactivated successfully.
Nov 29 01:19:44 np0005539508 python3[90651]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:44 np0005539508 podman[90652]: 2025-11-29 06:19:44.973406477 +0000 UTC m=+0.091247823 container create 7ee4283a0166078e6da25e5a4e986d8d1faa6a5c0e54ae69167c7c7256717049 (image=quay.io/ceph/ceph:v18, name=clever_raman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 01:19:45 np0005539508 podman[90652]: 2025-11-29 06:19:44.920012516 +0000 UTC m=+0.037853892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:19:45 np0005539508 systemd[1]: Started libpod-conmon-7ee4283a0166078e6da25e5a4e986d8d1faa6a5c0e54ae69167c7c7256717049.scope.
Nov 29 01:19:45 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:45 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/257171d7a576ba494360751dcf8a3dae1e48b33e73600a6f04bbcb2147f558b2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:45 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/257171d7a576ba494360751dcf8a3dae1e48b33e73600a6f04bbcb2147f558b2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:45 np0005539508 podman[90652]: 2025-11-29 06:19:45.106310563 +0000 UTC m=+0.224151989 container init 7ee4283a0166078e6da25e5a4e986d8d1faa6a5c0e54ae69167c7c7256717049 (image=quay.io/ceph/ceph:v18, name=clever_raman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 01:19:45 np0005539508 podman[90652]: 2025-11-29 06:19:45.115392852 +0000 UTC m=+0.233234238 container start 7ee4283a0166078e6da25e5a4e986d8d1faa6a5c0e54ae69167c7c7256717049 (image=quay.io/ceph/ceph:v18, name=clever_raman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:19:45 np0005539508 podman[90652]: 2025-11-29 06:19:45.119204115 +0000 UTC m=+0.237045501 container attach 7ee4283a0166078e6da25e5a4e986d8d1faa6a5c0e54ae69167c7c7256717049 (image=quay.io/ceph/ceph:v18, name=clever_raman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:19:45 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Nov 29 01:19:45 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2969688060' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 29 01:19:45 np0005539508 clever_raman[90667]: [client.openstack]
Nov 29 01:19:45 np0005539508 clever_raman[90667]: #011key = AQCBjyppAAAAABAAXQRTF6pnk4WV7TfvJo0Mjg==
Nov 29 01:19:45 np0005539508 clever_raman[90667]: #011caps mgr = "allow *"
Nov 29 01:19:45 np0005539508 clever_raman[90667]: #011caps mon = "profile rbd"
Nov 29 01:19:45 np0005539508 clever_raman[90667]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Nov 29 01:19:45 np0005539508 systemd[1]: libpod-7ee4283a0166078e6da25e5a4e986d8d1faa6a5c0e54ae69167c7c7256717049.scope: Deactivated successfully.
Nov 29 01:19:45 np0005539508 podman[90652]: 2025-11-29 06:19:45.803018216 +0000 UTC m=+0.920859602 container died 7ee4283a0166078e6da25e5a4e986d8d1faa6a5c0e54ae69167c7c7256717049 (image=quay.io/ceph/ceph:v18, name=clever_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:19:45 np0005539508 systemd[1]: var-lib-containers-storage-overlay-257171d7a576ba494360751dcf8a3dae1e48b33e73600a6f04bbcb2147f558b2-merged.mount: Deactivated successfully.
Nov 29 01:19:45 np0005539508 podman[90652]: 2025-11-29 06:19:45.853269914 +0000 UTC m=+0.971111260 container remove 7ee4283a0166078e6da25e5a4e986d8d1faa6a5c0e54ae69167c7c7256717049 (image=quay.io/ceph/ceph:v18, name=clever_raman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:19:45 np0005539508 systemd[1]: libpod-conmon-7ee4283a0166078e6da25e5a4e986d8d1faa6a5c0e54ae69167c7c7256717049.scope: Deactivated successfully.
Nov 29 01:19:45 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v112: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:46 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 29 01:19:46 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 29 01:19:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:19:47 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/2969688060' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 29 01:19:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:19:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:19:47 np0005539508 ansible-async_wrapper.py[90851]: Invoked with j985933889021 30 /home/zuul/.ansible/tmp/ansible-tmp-1764397186.9534473-37554-136998890228823/AnsiballZ_command.py _
Nov 29 01:19:47 np0005539508 ansible-async_wrapper.py[90854]: Starting module and watcher
Nov 29 01:19:47 np0005539508 ansible-async_wrapper.py[90854]: Start watching 90855 (30)
Nov 29 01:19:47 np0005539508 ansible-async_wrapper.py[90855]: Start module (90855)
Nov 29 01:19:47 np0005539508 ansible-async_wrapper.py[90851]: Return async_wrapper task started.
Nov 29 01:19:47 np0005539508 python3[90856]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:47 np0005539508 podman[90857]: 2025-11-29 06:19:47.841096941 +0000 UTC m=+0.066157710 container create c7f7a71d0b69c9f8f780efb54aed962a6f24896619eab515c2671f4f57f4f9d5 (image=quay.io/ceph/ceph:v18, name=musing_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:19:47 np0005539508 systemd[1]: Started libpod-conmon-c7f7a71d0b69c9f8f780efb54aed962a6f24896619eab515c2671f4f57f4f9d5.scope.
Nov 29 01:19:47 np0005539508 podman[90857]: 2025-11-29 06:19:47.813199095 +0000 UTC m=+0.038259934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:19:47 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:47 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/975ed0006c388b4abe469f1ba09f666a2bf5d31fb1027b794d60b5bd97427b0e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:47 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/975ed0006c388b4abe469f1ba09f666a2bf5d31fb1027b794d60b5bd97427b0e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:47 np0005539508 podman[90857]: 2025-11-29 06:19:47.948371978 +0000 UTC m=+0.173432747 container init c7f7a71d0b69c9f8f780efb54aed962a6f24896619eab515c2671f4f57f4f9d5 (image=quay.io/ceph/ceph:v18, name=musing_volhard, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:19:47 np0005539508 podman[90857]: 2025-11-29 06:19:47.958939711 +0000 UTC m=+0.184000470 container start c7f7a71d0b69c9f8f780efb54aed962a6f24896619eab515c2671f4f57f4f9d5 (image=quay.io/ceph/ceph:v18, name=musing_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:19:47 np0005539508 podman[90857]: 2025-11-29 06:19:47.963076103 +0000 UTC m=+0.188136872 container attach c7f7a71d0b69c9f8f780efb54aed962a6f24896619eab515c2671f4f57f4f9d5 (image=quay.io/ceph/ceph:v18, name=musing_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 01:19:47 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v113: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:48 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 29 01:19:48 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 01:19:48 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14301 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 01:19:48 np0005539508 musing_volhard[90872]: 
Nov 29 01:19:48 np0005539508 musing_volhard[90872]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 01:19:48 np0005539508 systemd[1]: libpod-c7f7a71d0b69c9f8f780efb54aed962a6f24896619eab515c2671f4f57f4f9d5.scope: Deactivated successfully.
Nov 29 01:19:48 np0005539508 podman[90857]: 2025-11-29 06:19:48.566144343 +0000 UTC m=+0.791205152 container died c7f7a71d0b69c9f8f780efb54aed962a6f24896619eab515c2671f4f57f4f9d5 (image=quay.io/ceph/ceph:v18, name=musing_volhard, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:19:48 np0005539508 systemd[1]: var-lib-containers-storage-overlay-975ed0006c388b4abe469f1ba09f666a2bf5d31fb1027b794d60b5bd97427b0e-merged.mount: Deactivated successfully.
Nov 29 01:19:48 np0005539508 podman[90857]: 2025-11-29 06:19:48.620273456 +0000 UTC m=+0.845334205 container remove c7f7a71d0b69c9f8f780efb54aed962a6f24896619eab515c2671f4f57f4f9d5 (image=quay.io/ceph/ceph:v18, name=musing_volhard, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 01:19:48 np0005539508 systemd[1]: libpod-conmon-c7f7a71d0b69c9f8f780efb54aed962a6f24896619eab515c2671f4f57f4f9d5.scope: Deactivated successfully.
Nov 29 01:19:48 np0005539508 ansible-async_wrapper.py[90855]: Module complete (90855)
Nov 29 01:19:48 np0005539508 python3[90955]: ansible-ansible.legacy.async_status Invoked with jid=j985933889021.90851 mode=status _async_dir=/root/.ansible_async
Nov 29 01:19:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 29 01:19:49 np0005539508 python3[91004]: ansible-ansible.legacy.async_status Invoked with jid=j985933889021.90851 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 01:19:49 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 29 01:19:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e32 e32: 3 total, 2 up, 3 in
Nov 29 01:19:49 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 2 up, 3 in
Nov 29 01:19:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:19:49 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:19:49 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:19:49 np0005539508 ceph-mon[74654]: from='osd.2 [v2:192.168.122.102:6800/60987518,v1:192.168.122.102:6801/60987518]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 01:19:49 np0005539508 ceph-mon[74654]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 01:19:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]} v 0) v1
Nov 29 01:19:49 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 29 01:19:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e32 create-or-move crush item name 'osd.2' initial_weight 0.0068000000000000005 at location {host=compute-2,root=default}
Nov 29 01:19:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:19:49 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:19:49 np0005539508 python3[91030]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:49 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v115: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:50 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:50 np0005539508 podman[91031]: 2025-11-29 06:19:50.030925692 +0000 UTC m=+0.066660365 container create a41224477c8337df61d75834ba2417a96a7b949c423ef29860c7170c16535da2 (image=quay.io/ceph/ceph:v18, name=sharp_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 01:19:50 np0005539508 systemd[1]: Started libpod-conmon-a41224477c8337df61d75834ba2417a96a7b949c423ef29860c7170c16535da2.scope.
Nov 29 01:19:50 np0005539508 podman[91031]: 2025-11-29 06:19:50.009700724 +0000 UTC m=+0.045435437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:19:50 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:50 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d8dd212d30579e2e0be881b06a3185db80aaca3c31341bdfe7f7eba4046f2a4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:50 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d8dd212d30579e2e0be881b06a3185db80aaca3c31341bdfe7f7eba4046f2a4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:50 np0005539508 podman[91031]: 2025-11-29 06:19:50.127613256 +0000 UTC m=+0.163347979 container init a41224477c8337df61d75834ba2417a96a7b949c423ef29860c7170c16535da2 (image=quay.io/ceph/ceph:v18, name=sharp_mendeleev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 01:19:50 np0005539508 podman[91031]: 2025-11-29 06:19:50.135796488 +0000 UTC m=+0.171531171 container start a41224477c8337df61d75834ba2417a96a7b949c423ef29860c7170c16535da2 (image=quay.io/ceph/ceph:v18, name=sharp_mendeleev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 01:19:50 np0005539508 podman[91031]: 2025-11-29 06:19:50.139288131 +0000 UTC m=+0.175022864 container attach a41224477c8337df61d75834ba2417a96a7b949c423ef29860c7170c16535da2 (image=quay.io/ceph/ceph:v18, name=sharp_mendeleev, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:19:50 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.ngsyhe started
Nov 29 01:19:50 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from mgr.compute-2.ngsyhe 192.168.122.102:0/708817067; not ready for session (expect reconnect)
Nov 29 01:19:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 29 01:19:50 np0005539508 ceph-mon[74654]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 29 01:19:50 np0005539508 ceph-mon[74654]: from='osd.2 [v2:192.168.122.102:6800/60987518,v1:192.168.122.102:6801/60987518]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 29 01:19:50 np0005539508 ceph-mon[74654]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 29 01:19:50 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:50 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:50 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14307 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 01:19:50 np0005539508 sharp_mendeleev[91070]: 
Nov 29 01:19:50 np0005539508 sharp_mendeleev[91070]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 01:19:50 np0005539508 systemd[1]: libpod-a41224477c8337df61d75834ba2417a96a7b949c423ef29860c7170c16535da2.scope: Deactivated successfully.
Nov 29 01:19:50 np0005539508 podman[91031]: 2025-11-29 06:19:50.744245037 +0000 UTC m=+0.779979750 container died a41224477c8337df61d75834ba2417a96a7b949c423ef29860c7170c16535da2 (image=quay.io/ceph/ceph:v18, name=sharp_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:19:50 np0005539508 systemd[1]: var-lib-containers-storage-overlay-3d8dd212d30579e2e0be881b06a3185db80aaca3c31341bdfe7f7eba4046f2a4-merged.mount: Deactivated successfully.
Nov 29 01:19:50 np0005539508 podman[91031]: 2025-11-29 06:19:50.793948969 +0000 UTC m=+0.829683672 container remove a41224477c8337df61d75834ba2417a96a7b949c423ef29860c7170c16535da2 (image=quay.io/ceph/ceph:v18, name=sharp_mendeleev, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 01:19:50 np0005539508 systemd[1]: libpod-conmon-a41224477c8337df61d75834ba2417a96a7b949c423ef29860c7170c16535da2.scope: Deactivated successfully.
Nov 29 01:19:51 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from mgr.compute-2.ngsyhe 192.168.122.102:0/708817067; not ready for session (expect reconnect)
Nov 29 01:19:51 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Nov 29 01:19:51 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Nov 29 01:19:51 np0005539508 python3[91289]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:51 np0005539508 podman[91290]: 2025-11-29 06:19:51.779253787 +0000 UTC m=+0.060447181 container create 4552d2c149774d7d3b410958554570173d1c134864d6687f6c2b8789f59b291a (image=quay.io/ceph/ceph:v18, name=suspicious_haslett, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:19:51 np0005539508 systemd[1]: Started libpod-conmon-4552d2c149774d7d3b410958554570173d1c134864d6687f6c2b8789f59b291a.scope.
Nov 29 01:19:51 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:51 np0005539508 podman[91290]: 2025-11-29 06:19:51.758627417 +0000 UTC m=+0.039820841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:19:51 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c98b6d5bf177e2118923c5249a829dca6ad9c7d95aa03f18ad0ff77446620d3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:51 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c98b6d5bf177e2118923c5249a829dca6ad9c7d95aa03f18ad0ff77446620d3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:51 np0005539508 podman[91290]: 2025-11-29 06:19:51.871722976 +0000 UTC m=+0.152916470 container init 4552d2c149774d7d3b410958554570173d1c134864d6687f6c2b8789f59b291a (image=quay.io/ceph/ceph:v18, name=suspicious_haslett, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 01:19:51 np0005539508 podman[91290]: 2025-11-29 06:19:51.882397452 +0000 UTC m=+0.163590886 container start 4552d2c149774d7d3b410958554570173d1c134864d6687f6c2b8789f59b291a (image=quay.io/ceph/ceph:v18, name=suspicious_haslett, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:19:51 np0005539508 podman[91290]: 2025-11-29 06:19:51.886176144 +0000 UTC m=+0.167369578 container attach 4552d2c149774d7d3b410958554570173d1c134864d6687f6c2b8789f59b291a (image=quay.io/ceph/ceph:v18, name=suspicious_haslett, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:19:51 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v116: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:52 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Nov 29 01:19:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e33 e33: 3 total, 2 up, 3 in
Nov 29 01:19:52 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 2 up, 3 in
Nov 29 01:19:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:19:52 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:19:52 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:19:52 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:19:52 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.vxabpq(active, since 2m), standbys: compute-2.ngsyhe
Nov 29 01:19:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:19:52 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:19:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 01:19:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.ngsyhe", "id": "compute-2.ngsyhe"} v 0) v1
Nov 29 01:19:52 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mgr metadata", "who": "compute-2.ngsyhe", "id": "compute-2.ngsyhe"}]: dispatch
Nov 29 01:19:52 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:19:52 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:19:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.1b( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.564597130s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 90.237648010s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:19:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.15( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.564558983s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 90.237670898s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:19:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.1b( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.564597130s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.237648010s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:19:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.13( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.564517021s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 90.237731934s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:19:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.10( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.564764023s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 90.238029480s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:19:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.13( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.564517021s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.237731934s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:19:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.10( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.564764023s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.238029480s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:19:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=33 pruub=15.724649429s) [] r=-1 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active pruub 92.398216248s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:19:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.15( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.564558983s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.237670898s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:19:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=33 pruub=12.555711746s) [] r=-1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active pruub 89.229385376s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:19:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=33 pruub=15.724649429s) [] r=-1 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.398216248s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:19:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=33 pruub=12.555711746s) [] r=-1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.229385376s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:19:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.c( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.569536209s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 90.243316650s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:19:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.d( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.569481850s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 90.243385315s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:19:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.a( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.563738823s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 90.237670898s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:19:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.c( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.569536209s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.243316650s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:19:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.a( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.563738823s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.237670898s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:19:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.d( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.569481850s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.243385315s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:19:52 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:19:52 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14313 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 01:19:52 np0005539508 suspicious_haslett[91305]: 
Nov 29 01:19:52 np0005539508 suspicious_haslett[91305]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Nov 29 01:19:52 np0005539508 systemd[1]: libpod-4552d2c149774d7d3b410958554570173d1c134864d6687f6c2b8789f59b291a.scope: Deactivated successfully.
Nov 29 01:19:52 np0005539508 podman[91290]: 2025-11-29 06:19:52.551060664 +0000 UTC m=+0.832254098 container died 4552d2c149774d7d3b410958554570173d1c134864d6687f6c2b8789f59b291a (image=quay.io/ceph/ceph:v18, name=suspicious_haslett, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 01:19:52 np0005539508 systemd[1]: var-lib-containers-storage-overlay-5c98b6d5bf177e2118923c5249a829dca6ad9c7d95aa03f18ad0ff77446620d3-merged.mount: Deactivated successfully.
Nov 29 01:19:52 np0005539508 ansible-async_wrapper.py[90854]: Done in kid B.
Nov 29 01:19:52 np0005539508 podman[91290]: 2025-11-29 06:19:52.6012112 +0000 UTC m=+0.882404594 container remove 4552d2c149774d7d3b410958554570173d1c134864d6687f6c2b8789f59b291a (image=quay.io/ceph/ceph:v18, name=suspicious_haslett, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 01:19:52 np0005539508 systemd[1]: libpod-conmon-4552d2c149774d7d3b410958554570173d1c134864d6687f6c2b8789f59b291a.scope: Deactivated successfully.
Nov 29 01:19:52 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.gaxpay started
Nov 29 01:19:52 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from mgr.compute-1.gaxpay 192.168.122.101:0/1611816633; not ready for session (expect reconnect)
Nov 29 01:19:53 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:19:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:19:53 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:19:53 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:19:53 np0005539508 python3[91368]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:53 np0005539508 ceph-mon[74654]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Nov 29 01:19:53 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:53 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:53 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from mgr.compute-1.gaxpay 192.168.122.101:0/1611816633; not ready for session (expect reconnect)
Nov 29 01:19:53 np0005539508 podman[91369]: 2025-11-29 06:19:53.688405537 +0000 UTC m=+0.052143345 container create 6e83b1fcc0594a647b6421859174a78cf577854ce76f5d8ca5da7044f2d8dfdf (image=quay.io/ceph/ceph:v18, name=amazing_grothendieck, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:19:53 np0005539508 systemd[1]: Started libpod-conmon-6e83b1fcc0594a647b6421859174a78cf577854ce76f5d8ca5da7044f2d8dfdf.scope.
Nov 29 01:19:53 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:53 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35ee37f26dc72644b76380fe37fc403ea5b6aee28bf5b27375ced2c52dd5b277/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:53 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35ee37f26dc72644b76380fe37fc403ea5b6aee28bf5b27375ced2c52dd5b277/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:53 np0005539508 podman[91369]: 2025-11-29 06:19:53.673477465 +0000 UTC m=+0.037215293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:19:53 np0005539508 podman[91369]: 2025-11-29 06:19:53.780993899 +0000 UTC m=+0.144731747 container init 6e83b1fcc0594a647b6421859174a78cf577854ce76f5d8ca5da7044f2d8dfdf (image=quay.io/ceph/ceph:v18, name=amazing_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 01:19:53 np0005539508 podman[91369]: 2025-11-29 06:19:53.787972956 +0000 UTC m=+0.151710804 container start 6e83b1fcc0594a647b6421859174a78cf577854ce76f5d8ca5da7044f2d8dfdf (image=quay.io/ceph/ceph:v18, name=amazing_grothendieck, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 01:19:53 np0005539508 podman[91369]: 2025-11-29 06:19:53.791566942 +0000 UTC m=+0.155304750 container attach 6e83b1fcc0594a647b6421859174a78cf577854ce76f5d8ca5da7044f2d8dfdf (image=quay.io/ceph/ceph:v18, name=amazing_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 29 01:19:53 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v118: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:19:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:19:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:19:54
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'backups', 'images', 'cephfs.cephfs.meta', 'volumes', '.mgr']
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14319 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 01:19:54 np0005539508 amazing_grothendieck[91384]: 
Nov 29 01:19:54 np0005539508 amazing_grothendieck[91384]: [{"container_id": "47d65a8aff6f", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.56%", "created": "2025-11-29T06:17:23.040678Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-11-29T06:17:23.103806Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T06:18:39.714127Z", "memory_usage": 11618222, "ports": [], "service_name": "crash", "started": "2025-11-29T06:17:22.908791Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-336ec58c-893b-528f-a0c1-6ed1196bc047@crash.compute-0", "version": "18.2.7"}, {"container_id": "4384fb97959c", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.65%", "created": "2025-11-29T06:18:18.466170Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "events": ["2025-11-29T06:18:18.510330Z daemon:crash.compute-1 [INFO] \"Deployed crash.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-29T06:19:52.114424Z", "memory_usage": 11785994, "ports": [], "service_name": "crash", "started": "2025-11-29T06:18:18.373501Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-336ec58c-893b-528f-a0c1-6ed1196bc047@crash.compute-1", "version": "18.2.7"}, {"daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "events": ["2025-11-29T06:19:34.246990Z daemon:crash.compute-2 [INFO] \"Deployed crash.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [], "service_name": "crash", "status": 2, "status_desc": "starting"}, {"container_id": "6f81410254a7", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "33.75%", "created": "2025-11-29T06:16:09.231591Z", "daemon_id": "compute-0.vxabpq", "daemon_name": "mgr.compute-0.vxabpq", "daemon_type": "mgr", "events": ["2025-11-29T06:17:28.682807Z daemon:mgr.compute-0.vxabpq [INFO] \"Reconfigured mgr.compute-0.vxabpq on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T06:18:39.713992Z", "memory_usage": 548510105, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-29T06:16:09.091594Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-336ec58c-893b-528f-a0c1-6ed1196bc047@mgr.compute-0.vxabpq", "version": "18.2.7"}, {"container_id": "a8b9f68ee8f2", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "100.00%", "created": "2025-11-29T06:19:31.639791Z", "daemon_id": "compute-1.gaxpay", "daemon_name": "mgr.compute-1.gaxpay", "daemon_type": "mgr", "events": ["2025-11-29T06:19:31.709129Z daemon:mgr.compute-1.gaxpay [INFO] \"Deployed mgr.compute-1.gaxpay on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-29T06:19:52.114673Z", "memory_usage": 484546969, "ports": [8765], "service_name": "mgr", "started": "2025-11-29T06:19:31.500255Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-336ec58c-893b-528f-a0c1-6ed1196bc047@mgr.compute-1.gaxpay", "version": "18.2.7"}, {"daemon_id": "compute-2.ngsyhe", "daemon_name": "mgr.compute-2.ngsyhe", "daemon_type": "mgr", "events": ["2025-11-29T06:19:29.510673Z daemon:mgr.compute-2.ngsyhe [INFO] \"Deployed mgr.compute-2.ngsyhe on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [8765], "service_name": "mgr", "status": 2, "status_desc": "starting"}, {"container_id": "c3c8680245c6", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "1.49%", "created": "2025-11-29T06:16:03.846438Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-11-29T06:17:27.545002Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T06:18:39.713776Z", "memory_request": 2147483648, "memory_usage": 35316039, "ports": [], "service_name": "mon", "started": "2025-11-29T06:16:06.829437Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-336ec58c-893b-528f-a0c1-6ed1196bc047@mon.compute-0", "version": "18.2.7"}, {"container_id": "6c6562254e3e", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.69%", "created": "2025-11-29T06:19:21.742553Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "events": ["2025-11-29T06:19:23.913168Z daemon:mon.compute-1 [INFO] \"Deployed mon.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-29T06:19:52.114609Z", "memory_request": 2147483648, "memory_usage": 28280094, "ports": [], "service_name": "mon", "started": "2025-11-29T06:19:21.606193Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-336ec58c-893b-528f-a0c1-6ed1196bc047@mon.compute-1", "version": "18.2.7"}, {"daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "events": ["2025-11-29T06:19:18.671495Z daemon:mon.compute-2 [INFO] \"Deployed mon.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "memory_request": 2147483648, "ports": [], "service_name": "mon", "status": 2, "status_desc": "starting"}, {"container_id": "aaeeb4acbe44", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "7.17%", "created": "2025-11-29T06:18:33.440691Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-11-29T06:18:33.850035Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "
Nov 29 01:19:54 np0005539508 systemd[1]: libpod-6e83b1fcc0594a647b6421859174a78cf577854ce76f5d8ca5da7044f2d8dfdf.scope: Deactivated successfully.
Nov 29 01:19:54 np0005539508 podman[91369]: 2025-11-29 06:19:54.311479149 +0000 UTC m=+0.675216997 container died 6e83b1fcc0594a647b6421859174a78cf577854ce76f5d8ca5da7044f2d8dfdf (image=quay.io/ceph/ceph:v18, name=amazing_grothendieck, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 01:19:54 np0005539508 systemd[1]: var-lib-containers-storage-overlay-35ee37f26dc72644b76380fe37fc403ea5b6aee28bf5b27375ced2c52dd5b277-merged.mount: Deactivated successfully.
Nov 29 01:19:54 np0005539508 podman[91369]: 2025-11-29 06:19:54.462049108 +0000 UTC m=+0.825786956 container remove 6e83b1fcc0594a647b6421859174a78cf577854ce76f5d8ca5da7044f2d8dfdf (image=quay.io/ceph/ceph:v18, name=amazing_grothendieck, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Nov 29 01:19:54 np0005539508 systemd[1]: libpod-conmon-6e83b1fcc0594a647b6421859174a78cf577854ce76f5d8ca5da7044f2d8dfdf.scope: Deactivated successfully.
Nov 29 01:19:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:19:54 np0005539508 rsyslogd[1007]: message too long (9871) with configured size 8096, begin of message is: [{"container_id": "47d65a8aff6f", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from mgr.compute-1.gaxpay 192.168.122.101:0/1611816633; not ready for session (expect reconnect)
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.009242174413735343 quantized to 1 (current 1)
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 16 (current 1)
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 01:19:54 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 01:19:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 01:19:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 01:19:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 29 01:19:55 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:19:55 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:19:55 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:19:55 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:19:55 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.vxabpq(active, since 3m), standbys: compute-2.ngsyhe, compute-1.gaxpay
Nov 29 01:19:55 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.gaxpay", "id": "compute-1.gaxpay"} v 0) v1
Nov 29 01:19:55 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mgr metadata", "who": "compute-1.gaxpay", "id": "compute-1.gaxpay"}]: dispatch
Nov 29 01:19:55 np0005539508 python3[91444]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:55 np0005539508 podman[91445]: 2025-11-29 06:19:55.57748845 +0000 UTC m=+0.028154464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:19:55 np0005539508 podman[91445]: 2025-11-29 06:19:55.707537641 +0000 UTC m=+0.158203665 container create 21a6a135e583520ca524f3413fbafc8866dac01ae16edbc46a1be1570ffef6e5 (image=quay.io/ceph/ceph:v18, name=condescending_brahmagupta, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 01:19:55 np0005539508 systemd[1]: Started libpod-conmon-21a6a135e583520ca524f3413fbafc8866dac01ae16edbc46a1be1570ffef6e5.scope.
Nov 29 01:19:55 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:55 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44c2c382eadcb5cc0ae05955944d10323ff3f14a059193b8caf814a6f1b6c3b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:55 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44c2c382eadcb5cc0ae05955944d10323ff3f14a059193b8caf814a6f1b6c3b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:55 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v119: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:56 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:19:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:19:56 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:19:56 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:19:56 np0005539508 podman[91445]: 2025-11-29 06:19:56.117955386 +0000 UTC m=+0.568621450 container init 21a6a135e583520ca524f3413fbafc8866dac01ae16edbc46a1be1570ffef6e5 (image=quay.io/ceph/ceph:v18, name=condescending_brahmagupta, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:19:56 np0005539508 podman[91445]: 2025-11-29 06:19:56.128627452 +0000 UTC m=+0.579293466 container start 21a6a135e583520ca524f3413fbafc8866dac01ae16edbc46a1be1570ffef6e5 (image=quay.io/ceph/ceph:v18, name=condescending_brahmagupta, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:19:56 np0005539508 podman[91445]: 2025-11-29 06:19:56.341029552 +0000 UTC m=+0.791695616 container attach 21a6a135e583520ca524f3413fbafc8866dac01ae16edbc46a1be1570ffef6e5 (image=quay.io/ceph/ceph:v18, name=condescending_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:19:56 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 29 01:19:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e34 e34: 3 total, 2 up, 3 in
Nov 29 01:19:56 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:56 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 2 up, 3 in
Nov 29 01:19:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:19:56 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:19:56 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:19:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:19:56 np0005539508 ceph-mgr[74948]: [progress INFO root] update: starting ev f17a2b4e-8ac5-45c2-afc8-67a9786cff10 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 29 01:19:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 01:19:56 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 01:19:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 01:19:56 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4274267034' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 01:19:56 np0005539508 condescending_brahmagupta[91460]: 
Nov 29 01:19:56 np0005539508 condescending_brahmagupta[91460]: {"fsid":"336ec58c-893b-528f-a0c1-6ed1196bc047","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":27,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":34,"num_osds":3,"num_up_osds":2,"osd_up_since":1764397129,"num_in_osds":3,"osd_in_since":1764397176,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":38}],"num_pgs":38,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":56037376,"bytes_avail":14967959552,"bytes_total":15023996928},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2025-11-29T06:19:51.975610+0000","services":{"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Nov 29 01:19:56 np0005539508 systemd[1]: libpod-21a6a135e583520ca524f3413fbafc8866dac01ae16edbc46a1be1570ffef6e5.scope: Deactivated successfully.
Nov 29 01:19:56 np0005539508 podman[91445]: 2025-11-29 06:19:56.816156923 +0000 UTC m=+1.266822907 container died 21a6a135e583520ca524f3413fbafc8866dac01ae16edbc46a1be1570ffef6e5 (image=quay.io/ceph/ceph:v18, name=condescending_brahmagupta, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 01:19:56 np0005539508 systemd[1]: var-lib-containers-storage-overlay-c44c2c382eadcb5cc0ae05955944d10323ff3f14a059193b8caf814a6f1b6c3b-merged.mount: Deactivated successfully.
Nov 29 01:19:57 np0005539508 podman[91445]: 2025-11-29 06:19:57.03758012 +0000 UTC m=+1.488246094 container remove 21a6a135e583520ca524f3413fbafc8866dac01ae16edbc46a1be1570ffef6e5 (image=quay.io/ceph/ceph:v18, name=condescending_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:19:57 np0005539508 systemd[1]: libpod-conmon-21a6a135e583520ca524f3413fbafc8866dac01ae16edbc46a1be1570ffef6e5.scope: Deactivated successfully.
Nov 29 01:19:57 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:57 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:19:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:19:57 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:19:57 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:19:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:19:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 29 01:19:57 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v121: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 01:19:57 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:19:57 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 01:19:57 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 29 01:19:57 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:57 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 01:19:57 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:19:58 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:19:58 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:19:58 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:19:58 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:19:58 np0005539508 python3[91523]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:58 np0005539508 podman[91524]: 2025-11-29 06:19:58.728520695 +0000 UTC m=+0.112781741 container create 29d3dddc1081012a762c26914632085030db3e154fa749037797d63e7e01d494 (image=quay.io/ceph/ceph:v18, name=upbeat_engelbart, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:19:58 np0005539508 podman[91524]: 2025-11-29 06:19:58.653239756 +0000 UTC m=+0.037500882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:19:58 np0005539508 systemd[1]: Started libpod-conmon-29d3dddc1081012a762c26914632085030db3e154fa749037797d63e7e01d494.scope.
Nov 29 01:19:58 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:19:58 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8050e8cfa258648331c7f3a59d11576970b7f392272ea046a587b3dd5cec24ec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:58 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8050e8cfa258648331c7f3a59d11576970b7f392272ea046a587b3dd5cec24ec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:19:58 np0005539508 podman[91524]: 2025-11-29 06:19:58.897386866 +0000 UTC m=+0.281647922 container init 29d3dddc1081012a762c26914632085030db3e154fa749037797d63e7e01d494 (image=quay.io/ceph/ceph:v18, name=upbeat_engelbart, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:19:58 np0005539508 podman[91524]: 2025-11-29 06:19:58.904349853 +0000 UTC m=+0.288610889 container start 29d3dddc1081012a762c26914632085030db3e154fa749037797d63e7e01d494 (image=quay.io/ceph/ceph:v18, name=upbeat_engelbart, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:19:58 np0005539508 podman[91524]: 2025-11-29 06:19:58.93263068 +0000 UTC m=+0.316891746 container attach 29d3dddc1081012a762c26914632085030db3e154fa749037797d63e7e01d494 (image=quay.io/ceph/ceph:v18, name=upbeat_engelbart, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:19:59 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:19:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:19:59 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:19:59 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:19:59 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 29 01:19:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e35 e35: 3 total, 2 up, 3 in
Nov 29 01:19:59 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 2 up, 3 in
Nov 29 01:19:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:19:59 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:19:59 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:19:59 np0005539508 ceph-mgr[74948]: [progress INFO root] update: starting ev 67b0cd5d-139a-461d-8d6d-720f496a076f (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 29 01:19:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 01:19:59 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 01:19:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 01:19:59 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2162770432' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 01:19:59 np0005539508 upbeat_engelbart[91539]: 
Nov 29 01:19:59 np0005539508 upbeat_engelbart[91539]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502923980","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""}]
Nov 29 01:19:59 np0005539508 systemd[1]: libpod-29d3dddc1081012a762c26914632085030db3e154fa749037797d63e7e01d494.scope: Deactivated successfully.
Nov 29 01:19:59 np0005539508 podman[91524]: 2025-11-29 06:19:59.435281246 +0000 UTC m=+0.819542342 container died 29d3dddc1081012a762c26914632085030db3e154fa749037797d63e7e01d494 (image=quay.io/ceph/ceph:v18, name=upbeat_engelbart, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:19:59 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:19:59 np0005539508 systemd[1]: var-lib-containers-storage-overlay-8050e8cfa258648331c7f3a59d11576970b7f392272ea046a587b3dd5cec24ec-merged.mount: Deactivated successfully.
Nov 29 01:19:59 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v123: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:19:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 01:19:59 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:19:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 01:19:59 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:00 np0005539508 ceph-mon[74654]: log_channel(cluster) log [ERR] : Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Nov 29 01:20:00 np0005539508 ceph-mon[74654]: log_channel(cluster) log [ERR] : [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Nov 29 01:20:00 np0005539508 ceph-mon[74654]: log_channel(cluster) log [ERR] :     fs cephfs is offline because no MDS is active for it.
Nov 29 01:20:00 np0005539508 ceph-mon[74654]: log_channel(cluster) log [ERR] : [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Nov 29 01:20:00 np0005539508 ceph-mon[74654]: log_channel(cluster) log [ERR] :     fs cephfs has 0 MDS online, but wants 1
Nov 29 01:20:00 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0[74650]: 2025-11-29T06:19:59.999+0000 7fe45807e640 -1 log_channel(cluster) log [ERR] : Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Nov 29 01:20:00 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0[74650]: 2025-11-29T06:19:59.999+0000 7fe45807e640 -1 log_channel(cluster) log [ERR] : [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Nov 29 01:20:00 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0[74650]: 2025-11-29T06:19:59.999+0000 7fe45807e640 -1 log_channel(cluster) log [ERR] :     fs cephfs is offline because no MDS is active for it.
Nov 29 01:20:00 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0[74650]: 2025-11-29T06:19:59.999+0000 7fe45807e640 -1 log_channel(cluster) log [ERR] : [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Nov 29 01:20:00 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0[74650]: 2025-11-29T06:19:59.999+0000 7fe45807e640 -1 log_channel(cluster) log [ERR] :     fs cephfs has 0 MDS online, but wants 1
Nov 29 01:20:00 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:20:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:20:00 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:20:00 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:20:00 np0005539508 podman[91524]: 2025-11-29 06:20:00.21678386 +0000 UTC m=+1.601044936 container remove 29d3dddc1081012a762c26914632085030db3e154fa749037797d63e7e01d494 (image=quay.io/ceph/ceph:v18, name=upbeat_engelbart, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 01:20:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 29 01:20:00 np0005539508 systemd[1]: libpod-conmon-29d3dddc1081012a762c26914632085030db3e154fa749037797d63e7e01d494.scope: Deactivated successfully.
Nov 29 01:20:01 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:20:01 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:20:01 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:20:01 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:20:01 np0005539508 python3[91601]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:20:01 np0005539508 podman[91602]: 2025-11-29 06:20:01.390569571 +0000 UTC m=+0.116990785 container create e9be07e73db8868bbabf7df55245924511d0ddd06f69d2577da4dc43d784ea73 (image=quay.io/ceph/ceph:v18, name=compassionate_yonath, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 01:20:01 np0005539508 podman[91602]: 2025-11-29 06:20:01.299408021 +0000 UTC m=+0.025829275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:20:01 np0005539508 systemd[1]: Started libpod-conmon-e9be07e73db8868bbabf7df55245924511d0ddd06f69d2577da4dc43d784ea73.scope.
Nov 29 01:20:01 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:20:01 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51ce61ce7d7de35e788380e21334a28bc9c3137d8972029e46d5dbfdff1c502d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:20:01 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51ce61ce7d7de35e788380e21334a28bc9c3137d8972029e46d5dbfdff1c502d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:20:01 np0005539508 podman[91602]: 2025-11-29 06:20:01.519347775 +0000 UTC m=+0.245769019 container init e9be07e73db8868bbabf7df55245924511d0ddd06f69d2577da4dc43d784ea73 (image=quay.io/ceph/ceph:v18, name=compassionate_yonath, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 01:20:01 np0005539508 podman[91602]: 2025-11-29 06:20:01.524693923 +0000 UTC m=+0.251115127 container start e9be07e73db8868bbabf7df55245924511d0ddd06f69d2577da4dc43d784ea73 (image=quay.io/ceph/ceph:v18, name=compassionate_yonath, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:20:01 np0005539508 podman[91602]: 2025-11-29 06:20:01.56712864 +0000 UTC m=+0.293549894 container attach e9be07e73db8868bbabf7df55245924511d0ddd06f69d2577da4dc43d784ea73 (image=quay.io/ceph/ceph:v18, name=compassionate_yonath, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:20:01 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:20:01 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 29 01:20:01 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:20:01 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:20:01 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e36 e36: 3 total, 2 up, 3 in
Nov 29 01:20:01 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 2 up, 3 in
Nov 29 01:20:01 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:20:01 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:20:01 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:20:01 np0005539508 ceph-mgr[74948]: [progress INFO root] update: starting ev b629c199-66cb-4b94-9dcf-515b4b078ad9 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 29 01:20:01 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Nov 29 01:20:01 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 29 01:20:01 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 36 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=36 pruub=6.332367420s) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.398216248s@ mbc={}] start_peering_interval up [] -> [], acting [] -> [], acting_primary ? -> -1, up_primary ? -> -1, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:01 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 36 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=36 pruub=9.596056938s) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active pruub 95.661926270s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:01 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 36 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=36 pruub=9.596056938s) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown pruub 95.661926270s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:01 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 36 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=36 pruub=6.332367420s) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.398216248s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:01 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v125: 100 pgs: 62 unknown, 38 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:20:01 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 01:20:01 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:02 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:20:02 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:20:02 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:20:02 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:20:02 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Nov 29 01:20:02 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3618548784' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 29 01:20:02 np0005539508 compassionate_yonath[91617]: mimic
Nov 29 01:20:02 np0005539508 systemd[1]: libpod-e9be07e73db8868bbabf7df55245924511d0ddd06f69d2577da4dc43d784ea73.scope: Deactivated successfully.
Nov 29 01:20:02 np0005539508 podman[91642]: 2025-11-29 06:20:02.220636272 +0000 UTC m=+0.043236141 container died e9be07e73db8868bbabf7df55245924511d0ddd06f69d2577da4dc43d784ea73 (image=quay.io/ceph/ceph:v18, name=compassionate_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:20:02 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:20:02 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 29 01:20:02 np0005539508 systemd[1]: var-lib-containers-storage-overlay-51ce61ce7d7de35e788380e21334a28bc9c3137d8972029e46d5dbfdff1c502d-merged.mount: Deactivated successfully.
Nov 29 01:20:03 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:20:03 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:20:03 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:20:03 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:20:03 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 29 01:20:03 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 01:20:03 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:03 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:03 np0005539508 ceph-mon[74654]: Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Nov 29 01:20:03 np0005539508 ceph-mon[74654]: [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Nov 29 01:20:03 np0005539508 ceph-mon[74654]:    fs cephfs is offline because no MDS is active for it.
Nov 29 01:20:03 np0005539508 ceph-mon[74654]: [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Nov 29 01:20:03 np0005539508 ceph-mon[74654]:    fs cephfs has 0 MDS online, but wants 1
Nov 29 01:20:03 np0005539508 podman[91642]: 2025-11-29 06:20:03.748622453 +0000 UTC m=+1.571222322 container remove e9be07e73db8868bbabf7df55245924511d0ddd06f69d2577da4dc43d784ea73 (image=quay.io/ceph/ceph:v18, name=compassionate_yonath, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 01:20:03 np0005539508 systemd[1]: libpod-conmon-e9be07e73db8868bbabf7df55245924511d0ddd06f69d2577da4dc43d784ea73.scope: Deactivated successfully.
Nov 29 01:20:03 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v126: 100 pgs: 62 unknown, 38 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:20:03 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 01:20:03 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:04 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:20:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:20:04 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:20:04 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:20:04 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 29 01:20:04 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:20:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e37 e37: 3 total, 2 up, 3 in
Nov 29 01:20:04 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 2 up, 3 in
Nov 29 01:20:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:20:04 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:20:04 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:20:04 np0005539508 ceph-mgr[74948]: [progress INFO root] update: starting ev fc739ab0-ca91-423f-b0ae-3ebb6cf4e220 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 29 01:20:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 01:20:04 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.e( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.d( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.4( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.9( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.a( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.2( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.1( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.5( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.3( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1f( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.6( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.18( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1d( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.1a( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.f( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.8( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.13( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.14( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.15( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.16( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.12( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.3( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.11( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.19( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.4( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.1e( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.7( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.7( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.6( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37 pruub=0.587849140s) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.229385376s@ mbc={}] start_peering_interval up [] -> [], acting [] -> [], acting_primary ? -> -1, up_primary ? -> -1, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.5( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.a( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.d( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.2( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.18( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.1f( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.b( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.c( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.c( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.b( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.f( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.8( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.e( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.9( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.17( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.10( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.14( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.13( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.12( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.15( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.11( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.16( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.17( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.10( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1e( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.19( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1c( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.1b( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1b( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.1c( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.1d( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1a( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37 pruub=0.587849140s) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.229385376s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.d( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.e( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.5( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.3( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.6( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1d( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1f( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.f( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.13( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.16( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.15( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.19( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.7( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.4( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.2( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.a( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.b( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.18( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.c( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.8( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.9( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.17( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.14( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.12( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1e( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.10( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1b( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.11( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1a( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1c( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.0( empty local-lis/les=36/37 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:04 np0005539508 ceph-mgr[74948]: [progress WARNING root] Starting Global Recovery Event,93 pgs not in active + clean state
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Nov 29 01:20:04 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Nov 29 01:20:04 np0005539508 python3[91682]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:20:04 np0005539508 podman[91683]: 2025-11-29 06:20:04.843021213 +0000 UTC m=+0.043053716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:20:04 np0005539508 podman[91683]: 2025-11-29 06:20:04.935002557 +0000 UTC m=+0.135035020 container create 9f2f09217e54e2ad51183317230eeb57f8a93ffe6afe696267d50acfa2cdbabd (image=quay.io/ceph/ceph:v18, name=nervous_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:20:04 np0005539508 systemd[1]: Started libpod-conmon-9f2f09217e54e2ad51183317230eeb57f8a93ffe6afe696267d50acfa2cdbabd.scope.
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:20:05 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:20:05 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a61c7ad62fcfa25acf6093162de647c72f45a1049b41f8becb92fea647899af9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:20:05 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a61c7ad62fcfa25acf6093162de647c72f45a1049b41f8becb92fea647899af9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:20:05 np0005539508 podman[91683]: 2025-11-29 06:20:05.109413992 +0000 UTC m=+0.309446535 container init 9f2f09217e54e2ad51183317230eeb57f8a93ffe6afe696267d50acfa2cdbabd (image=quay.io/ceph/ceph:v18, name=nervous_gauss, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:20:05 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:20:05 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:20:05 np0005539508 podman[91683]: 2025-11-29 06:20:05.120414188 +0000 UTC m=+0.320446641 container start 9f2f09217e54e2ad51183317230eeb57f8a93ffe6afe696267d50acfa2cdbabd (image=quay.io/ceph/ceph:v18, name=nervous_gauss, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:20:05 np0005539508 podman[91683]: 2025-11-29 06:20:05.151769577 +0000 UTC m=+0.351802050 container attach 9f2f09217e54e2ad51183317230eeb57f8a93ffe6afe696267d50acfa2cdbabd (image=quay.io/ceph/ceph:v18, name=nervous_gauss, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 29 01:20:05 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 29 01:20:05 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 29 01:20:05 np0005539508 ceph-mgr[74948]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 128.0M
Nov 29 01:20:05 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 128.0M
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 29 01:20:05 np0005539508 ceph-mgr[74948]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 29 01:20:05 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:20:05 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 29 01:20:05 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 29 01:20:05 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 29 01:20:05 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 29 01:20:05 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 29 01:20:05 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3247558833' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 29 01:20:05 np0005539508 nervous_gauss[91699]: 
Nov 29 01:20:05 np0005539508 nervous_gauss[91699]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":2},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":8}}
Nov 29 01:20:05 np0005539508 systemd[1]: libpod-9f2f09217e54e2ad51183317230eeb57f8a93ffe6afe696267d50acfa2cdbabd.scope: Deactivated successfully.
Nov 29 01:20:05 np0005539508 podman[91683]: 2025-11-29 06:20:05.788256545 +0000 UTC m=+0.988289028 container died 9f2f09217e54e2ad51183317230eeb57f8a93ffe6afe696267d50acfa2cdbabd (image=quay.io/ceph/ceph:v18, name=nervous_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:20:05 np0005539508 systemd[1]: var-lib-containers-storage-overlay-a61c7ad62fcfa25acf6093162de647c72f45a1049b41f8becb92fea647899af9-merged.mount: Deactivated successfully.
Nov 29 01:20:05 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v128: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Nov 29 01:20:05 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 01:20:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:20:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 29 01:20:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e38 e38: 3 total, 2 up, 3 in
Nov 29 01:20:06 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 2 up, 3 in
Nov 29 01:20:06 np0005539508 podman[91683]: 2025-11-29 06:20:06.069645838 +0000 UTC m=+1.269678291 container remove 9f2f09217e54e2ad51183317230eeb57f8a93ffe6afe696267d50acfa2cdbabd (image=quay.io/ceph/ceph:v18, name=nervous_gauss, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 01:20:06 np0005539508 systemd[1]: libpod-conmon-9f2f09217e54e2ad51183317230eeb57f8a93ffe6afe696267d50acfa2cdbabd.scope: Deactivated successfully.
Nov 29 01:20:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:20:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:20:06 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:20:06 np0005539508 ceph-mgr[74948]: [progress INFO root] update: starting ev 48b278c5-da9f-479f-8dad-a73732aa1447 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 29 01:20:06 np0005539508 ceph-mgr[74948]: [progress INFO root] complete: finished ev f17a2b4e-8ac5-45c2-afc8-67a9786cff10 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 29 01:20:06 np0005539508 ceph-mgr[74948]: [progress INFO root] Completed event f17a2b4e-8ac5-45c2-afc8-67a9786cff10 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 9 seconds
Nov 29 01:20:06 np0005539508 ceph-mgr[74948]: [progress INFO root] complete: finished ev 67b0cd5d-139a-461d-8d6d-720f496a076f (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 29 01:20:06 np0005539508 ceph-mgr[74948]: [progress INFO root] Completed event 67b0cd5d-139a-461d-8d6d-720f496a076f (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Nov 29 01:20:06 np0005539508 ceph-mgr[74948]: [progress INFO root] complete: finished ev b629c199-66cb-4b94-9dcf-515b4b078ad9 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 29 01:20:06 np0005539508 ceph-mgr[74948]: [progress INFO root] Completed event b629c199-66cb-4b94-9dcf-515b4b078ad9 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 4 seconds
Nov 29 01:20:06 np0005539508 ceph-mgr[74948]: [progress INFO root] complete: finished ev fc739ab0-ca91-423f-b0ae-3ebb6cf4e220 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 29 01:20:06 np0005539508 ceph-mgr[74948]: [progress INFO root] Completed event fc739ab0-ca91-423f-b0ae-3ebb6cf4e220 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 2 seconds
Nov 29 01:20:06 np0005539508 ceph-mgr[74948]: [progress INFO root] complete: finished ev 48b278c5-da9f-479f-8dad-a73732aa1447 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 29 01:20:06 np0005539508 ceph-mgr[74948]: [progress INFO root] Completed event 48b278c5-da9f-479f-8dad-a73732aa1447 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.2( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.4( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.7( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.1e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.1c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.12( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.14( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.5( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.17( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.18( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.6( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.1( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.3( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.19( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.9( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.8( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.16( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.15( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.13( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.10( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.11( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.1f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.1d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.1a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.1b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:06 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:20:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:20:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:20:06 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 29 01:20:06 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 29 01:20:06 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 01:20:06 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 01:20:06 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 01:20:06 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 01:20:06 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 01:20:06 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 01:20:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 29 01:20:07 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:20:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:20:07 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:20:07 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:20:07 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v130: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:20:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 01:20:07 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Nov 29 01:20:07 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 01:20:08 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:20:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:20:08 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:20:08 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:20:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:20:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 01:20:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:20:09 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:20:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:20:09 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:20:09 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:20:09 np0005539508 ceph-mgr[74948]: [progress INFO root] Writing back 11 completed events
Nov 29 01:20:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 01:20:09 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v131: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:20:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 01:20:09 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Nov 29 01:20:09 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 01:20:10 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:20:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:20:10 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:20:10 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:20:10 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 29 01:20:10 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 29 01:20:11 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:20:11 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:20:11 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:20:11 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v132: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:20:11 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 01:20:11 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:11 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Nov 29 01:20:11 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 01:20:12 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:20:12 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 29 01:20:12 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 29 01:20:13 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:20:13 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Nov 29 01:20:13 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Nov 29 01:20:13 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 01:20:13 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:13 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:13 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 29 01:20:13 np0005539508 ceph-mon[74654]: Adjusting osd_memory_target on compute-2 to 128.0M
Nov 29 01:20:13 np0005539508 ceph-mon[74654]: Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 29 01:20:13 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:20:13 np0005539508 ceph-mon[74654]: Updating compute-0:/etc/ceph/ceph.conf
Nov 29 01:20:13 np0005539508 ceph-mon[74654]: Updating compute-1:/etc/ceph/ceph.conf
Nov 29 01:20:13 np0005539508 ceph-mon[74654]: Updating compute-2:/etc/ceph/ceph.conf
Nov 29 01:20:13 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 01:20:13 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v133: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:20:13 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 01:20:13 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:13 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Nov 29 01:20:13 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 01:20:14 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:20:14 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 01:20:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e39 e39: 3 total, 2 up, 3 in
Nov 29 01:20:14 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:14 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 2 up, 3 in
Nov 29 01:20:14 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:20:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:20:15 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:20:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:20:15 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:20:15 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:20:15 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:20:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 29 01:20:15 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v135: 146 pgs: 77 unknown, 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:20:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 01:20:15 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:20:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 29 01:20:16 np0005539508 ceph-mon[74654]: Updating compute-0:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 01:20:16 np0005539508 ceph-mon[74654]: Updating compute-1:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 01:20:16 np0005539508 ceph-mon[74654]: Updating compute-2:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 01:20:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 01:20:16 np0005539508 ceph-mon[74654]: OSD bench result of 1381.921175 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 01:20:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 01:20:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 01:20:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 01:20:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 01:20:16 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 39 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=39 pruub=8.360642433s) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active pruub 108.834419250s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:16 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 39 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=39 pruub=8.360642433s) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown pruub 108.834419250s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:16 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:20:16 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:16 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:20:16 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:20:16 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:20:16 np0005539508 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 01:20:17 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:20:17 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 01:20:17 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:20:17 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 01:20:17 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:20:17 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 01:20:17 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:20:17 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 01:20:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Nov 29 01:20:17 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 01:20:17 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:17 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/60987518,v1:192.168.122.102:6801/60987518] boot
Nov 29 01:20:17 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Nov 29 01:20:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 01:20:17 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 01:20:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:20:17 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:17 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v137: 177 pgs: 108 unknown, 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 01:20:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 29 01:20:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:20:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 01:20:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:20:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 01:20:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:20:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 01:20:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:20:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 01:20:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:19 np0005539508 ceph-mon[74654]: osd.2 [v2:192.168.122.102:6800/60987518,v1:192.168.122.102:6801/60987518] boot
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.f( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.c( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.a( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.a( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.9( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.2( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.1( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.9( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.2( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.7( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.4( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.4( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.4( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.2( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.2( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.4( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.4( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.7( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.7( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1a( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1a( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.18( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.18( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.1b( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.1b( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.12( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.15( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.12( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.15( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.14( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.14( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.14( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.13( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.13( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.12( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.14( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.12( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.17( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.10( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.17( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.10( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.d( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.11( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.5( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.11( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.5( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.8( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.6( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.18( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.18( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.3( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.8( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.3( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.6( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.5( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.2( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.6( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1e( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1e( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.7( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.7( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.6( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.3( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.3( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.6( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.3( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.5( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.8( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.5( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.19( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.c( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.d( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.c( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.d( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1f( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1f( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.19( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.9( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.d( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.d( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.e( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.c( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.c( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.a( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.a( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.b( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.9( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.a( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.b( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.9( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.b( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.8( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.f( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.f( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.16( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.16( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.8( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.15( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.e( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.10( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.15( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.10( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.13( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.13( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.15( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.15( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.13( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.10( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.13( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.16( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.10( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.16( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.11( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.e( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.17( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.11( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.17( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.19( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1b( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.19( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1b( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1c( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1d( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1c( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1d( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:19 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:19 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v138: 177 pgs: 7 peering, 108 unknown, 62 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:20:21 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v139: 177 pgs: 24 peering, 93 unknown, 60 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:20:23 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v140: 177 pgs: 95 peering, 31 unknown, 51 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:20:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:20:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:20:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:20:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:20:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:20:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:20:25 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v141: 177 pgs: 95 peering, 31 unknown, 51 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:20:27 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.8 deep-scrub starts
Nov 29 01:20:27 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.8 deep-scrub ok
Nov 29 01:20:27 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v142: 177 pgs: 95 peering, 31 unknown, 51 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:20:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:20:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 29 01:20:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:29 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 29 01:20:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:20:29 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.f( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:29 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.1( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:29 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.7( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:29 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.d( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:29 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.4( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:29 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.6( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:29 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.5( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:29 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.3( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:29 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.0( empty local-lis/les=39/41 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:29 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.8( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:29 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.2( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:29 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.9( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:29 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.e( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:29 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.a( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:29 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.b( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:29 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.c( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:20:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:20:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:20:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:20:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:20:29 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:29 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:29 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v144: 177 pgs: 95 peering, 31 unknown, 51 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:20:30 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:30 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev eaf0f8c9-d8ab-4004-a696-5edb2077dc20 does not exist
Nov 29 01:20:30 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev ef8e146f-5a62-4e89-b1d5-d6820051da58 does not exist
Nov 29 01:20:30 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 07963277-4d37-4e82-ae54-b1888f5688ae does not exist
Nov 29 01:20:30 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:20:30 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:20:30 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:20:30 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:20:30 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:20:30 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:20:30 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.9 deep-scrub starts
Nov 29 01:20:30 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.9 deep-scrub ok
Nov 29 01:20:30 np0005539508 podman[92728]: 2025-11-29 06:20:30.72867326 +0000 UTC m=+0.022319771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:20:30 np0005539508 podman[92728]: 2025-11-29 06:20:30.926812388 +0000 UTC m=+0.220458849 container create db1969c77c261e98716eb001ca8e0c66f0f695756228786ea7a695f2c23f1b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_murdock, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:20:31 np0005539508 systemd[1]: Started libpod-conmon-db1969c77c261e98716eb001ca8e0c66f0f695756228786ea7a695f2c23f1b86.scope.
Nov 29 01:20:31 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:20:31 np0005539508 podman[92728]: 2025-11-29 06:20:31.380264767 +0000 UTC m=+0.673911228 container init db1969c77c261e98716eb001ca8e0c66f0f695756228786ea7a695f2c23f1b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:20:31 np0005539508 podman[92728]: 2025-11-29 06:20:31.396736855 +0000 UTC m=+0.690383316 container start db1969c77c261e98716eb001ca8e0c66f0f695756228786ea7a695f2c23f1b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_murdock, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:20:31 np0005539508 affectionate_murdock[92744]: 167 167
Nov 29 01:20:31 np0005539508 systemd[1]: libpod-db1969c77c261e98716eb001ca8e0c66f0f695756228786ea7a695f2c23f1b86.scope: Deactivated successfully.
Nov 29 01:20:31 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:20:31 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:31 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:31 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:20:31 np0005539508 podman[92728]: 2025-11-29 06:20:31.827388437 +0000 UTC m=+1.121034908 container attach db1969c77c261e98716eb001ca8e0c66f0f695756228786ea7a695f2c23f1b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 01:20:31 np0005539508 podman[92728]: 2025-11-29 06:20:31.828403578 +0000 UTC m=+1.122050059 container died db1969c77c261e98716eb001ca8e0c66f0f695756228786ea7a695f2c23f1b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_murdock, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 01:20:31 np0005539508 systemd[1]: var-lib-containers-storage-overlay-84adc087714af5b45bf20e8b466af974b82463bec579500c8de68f180e660495-merged.mount: Deactivated successfully.
Nov 29 01:20:31 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v145: 177 pgs: 78 peering, 99 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:20:32 np0005539508 podman[92728]: 2025-11-29 06:20:32.002842414 +0000 UTC m=+1.296488875 container remove db1969c77c261e98716eb001ca8e0c66f0f695756228786ea7a695f2c23f1b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_murdock, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 01:20:32 np0005539508 systemd[1]: libpod-conmon-db1969c77c261e98716eb001ca8e0c66f0f695756228786ea7a695f2c23f1b86.scope: Deactivated successfully.
Nov 29 01:20:32 np0005539508 podman[92767]: 2025-11-29 06:20:32.245994524 +0000 UTC m=+0.112461051 container create 6abdba51dbc5e843b66025b6ffbdc199bc307945bf5d0fee311f7a1d3b19bf2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_borg, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:20:32 np0005539508 podman[92767]: 2025-11-29 06:20:32.170679514 +0000 UTC m=+0.037146091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:20:32 np0005539508 systemd[1]: Started libpod-conmon-6abdba51dbc5e843b66025b6ffbdc199bc307945bf5d0fee311f7a1d3b19bf2e.scope.
Nov 29 01:20:32 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:20:32 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7a79dfbff13448a63edc0886ec9b61d5b9895b9f493fb2c1b95ff3b162b56f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:20:32 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7a79dfbff13448a63edc0886ec9b61d5b9895b9f493fb2c1b95ff3b162b56f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:20:32 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7a79dfbff13448a63edc0886ec9b61d5b9895b9f493fb2c1b95ff3b162b56f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:20:32 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7a79dfbff13448a63edc0886ec9b61d5b9895b9f493fb2c1b95ff3b162b56f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:20:32 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7a79dfbff13448a63edc0886ec9b61d5b9895b9f493fb2c1b95ff3b162b56f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:20:32 np0005539508 podman[92767]: 2025-11-29 06:20:32.369060039 +0000 UTC m=+0.235526576 container init 6abdba51dbc5e843b66025b6ffbdc199bc307945bf5d0fee311f7a1d3b19bf2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_borg, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:20:32 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.a deep-scrub starts
Nov 29 01:20:32 np0005539508 podman[92767]: 2025-11-29 06:20:32.380807027 +0000 UTC m=+0.247273524 container start 6abdba51dbc5e843b66025b6ffbdc199bc307945bf5d0fee311f7a1d3b19bf2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_borg, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:20:32 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.a deep-scrub ok
Nov 29 01:20:32 np0005539508 podman[92767]: 2025-11-29 06:20:32.392083601 +0000 UTC m=+0.258550098 container attach 6abdba51dbc5e843b66025b6ffbdc199bc307945bf5d0fee311f7a1d3b19bf2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_borg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:20:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:20:33 np0005539508 zen_borg[92783]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:20:33 np0005539508 zen_borg[92783]: --> relative data size: 1.0
Nov 29 01:20:33 np0005539508 zen_borg[92783]: --> All data devices are unavailable
Nov 29 01:20:33 np0005539508 systemd[1]: libpod-6abdba51dbc5e843b66025b6ffbdc199bc307945bf5d0fee311f7a1d3b19bf2e.scope: Deactivated successfully.
Nov 29 01:20:33 np0005539508 podman[92767]: 2025-11-29 06:20:33.189091354 +0000 UTC m=+1.055557851 container died 6abdba51dbc5e843b66025b6ffbdc199bc307945bf5d0fee311f7a1d3b19bf2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_borg, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:20:33 np0005539508 systemd[1]: var-lib-containers-storage-overlay-6f7a79dfbff13448a63edc0886ec9b61d5b9895b9f493fb2c1b95ff3b162b56f-merged.mount: Deactivated successfully.
Nov 29 01:20:33 np0005539508 podman[92767]: 2025-11-29 06:20:33.260140578 +0000 UTC m=+1.126607145 container remove 6abdba51dbc5e843b66025b6ffbdc199bc307945bf5d0fee311f7a1d3b19bf2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:20:33 np0005539508 systemd[1]: libpod-conmon-6abdba51dbc5e843b66025b6ffbdc199bc307945bf5d0fee311f7a1d3b19bf2e.scope: Deactivated successfully.
Nov 29 01:20:33 np0005539508 podman[92950]: 2025-11-29 06:20:33.890504015 +0000 UTC m=+0.066678061 container create 3fe7cb5f8a49deeed9689fd78a37aec4ba35d1007b1cb0546108c9e47fadde4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_euclid, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 01:20:33 np0005539508 systemd[1]: Started libpod-conmon-3fe7cb5f8a49deeed9689fd78a37aec4ba35d1007b1cb0546108c9e47fadde4f.scope.
Nov 29 01:20:33 np0005539508 podman[92950]: 2025-11-29 06:20:33.849725051 +0000 UTC m=+0.025899147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:20:33 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:20:33 np0005539508 podman[92950]: 2025-11-29 06:20:33.969672824 +0000 UTC m=+0.145846870 container init 3fe7cb5f8a49deeed9689fd78a37aec4ba35d1007b1cb0546108c9e47fadde4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 01:20:33 np0005539508 podman[92950]: 2025-11-29 06:20:33.976346317 +0000 UTC m=+0.152520363 container start 3fe7cb5f8a49deeed9689fd78a37aec4ba35d1007b1cb0546108c9e47fadde4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_euclid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 01:20:33 np0005539508 frosty_euclid[92968]: 167 167
Nov 29 01:20:33 np0005539508 systemd[1]: libpod-3fe7cb5f8a49deeed9689fd78a37aec4ba35d1007b1cb0546108c9e47fadde4f.scope: Deactivated successfully.
Nov 29 01:20:33 np0005539508 podman[92950]: 2025-11-29 06:20:33.979515028 +0000 UTC m=+0.155689104 container attach 3fe7cb5f8a49deeed9689fd78a37aec4ba35d1007b1cb0546108c9e47fadde4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_euclid, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 01:20:33 np0005539508 podman[92950]: 2025-11-29 06:20:33.981758672 +0000 UTC m=+0.157932718 container died 3fe7cb5f8a49deeed9689fd78a37aec4ba35d1007b1cb0546108c9e47fadde4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 01:20:33 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v146: 177 pgs: 177 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:20:33 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 01:20:33 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:33 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 01:20:33 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:33 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 01:20:33 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 01:20:33 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 01:20:33 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:33 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 01:20:33 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:34 np0005539508 systemd[1]: var-lib-containers-storage-overlay-6c3c5c6400dce5fda5be41cbf4e046e0fa61a6f2d5c933e8b45a493fd2d5ce2f-merged.mount: Deactivated successfully.
Nov 29 01:20:34 np0005539508 podman[92950]: 2025-11-29 06:20:34.018768738 +0000 UTC m=+0.194942784 container remove 3fe7cb5f8a49deeed9689fd78a37aec4ba35d1007b1cb0546108c9e47fadde4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_euclid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 01:20:34 np0005539508 systemd[1]: libpod-conmon-3fe7cb5f8a49deeed9689fd78a37aec4ba35d1007b1cb0546108c9e47fadde4f.scope: Deactivated successfully.
Nov 29 01:20:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 29 01:20:34 np0005539508 podman[92991]: 2025-11-29 06:20:34.191681876 +0000 UTC m=+0.049220158 container create e7b4a3a3304f46a35b72d7b9d75aecbac93f5648e89eba2c888ef9b799670a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_davinci, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:20:34 np0005539508 systemd[1]: Started libpod-conmon-e7b4a3a3304f46a35b72d7b9d75aecbac93f5648e89eba2c888ef9b799670a5e.scope.
Nov 29 01:20:34 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:20:34 np0005539508 podman[92991]: 2025-11-29 06:20:34.165475522 +0000 UTC m=+0.023013834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:20:34 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e830be9267e2f35f51f918336748e6606ad77a7ef067424a91acac31ab50f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:20:34 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e830be9267e2f35f51f918336748e6606ad77a7ef067424a91acac31ab50f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:20:34 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e830be9267e2f35f51f918336748e6606ad77a7ef067424a91acac31ab50f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:20:34 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e830be9267e2f35f51f918336748e6606ad77a7ef067424a91acac31ab50f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:20:34 np0005539508 podman[92991]: 2025-11-29 06:20:34.278581018 +0000 UTC m=+0.136119320 container init e7b4a3a3304f46a35b72d7b9d75aecbac93f5648e89eba2c888ef9b799670a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 01:20:34 np0005539508 podman[92991]: 2025-11-29 06:20:34.285018624 +0000 UTC m=+0.142556906 container start e7b4a3a3304f46a35b72d7b9d75aecbac93f5648e89eba2c888ef9b799670a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:20:34 np0005539508 podman[92991]: 2025-11-29 06:20:34.288829683 +0000 UTC m=+0.146367995 container attach e7b4a3a3304f46a35b72d7b9d75aecbac93f5648e89eba2c888ef9b799670a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_davinci, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 01:20:35 np0005539508 practical_davinci[93007]: {
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:    "1": [
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:        {
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:            "devices": [
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:                "/dev/loop3"
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:            ],
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:            "lv_name": "ceph_lv0",
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:            "lv_size": "7511998464",
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:            "name": "ceph_lv0",
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:            "tags": {
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:                "ceph.cluster_name": "ceph",
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:                "ceph.crush_device_class": "",
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:                "ceph.encrypted": "0",
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:                "ceph.osd_id": "1",
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:                "ceph.type": "block",
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:                "ceph.vdo": "0"
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:            },
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:            "type": "block",
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:            "vg_name": "ceph_vg0"
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:        }
Nov 29 01:20:35 np0005539508 practical_davinci[93007]:    ]
Nov 29 01:20:35 np0005539508 practical_davinci[93007]: }
Nov 29 01:20:35 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:20:35 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:20:35 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 01:20:35 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:20:35 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:20:35 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 29 01:20:35 np0005539508 systemd[1]: libpod-e7b4a3a3304f46a35b72d7b9d75aecbac93f5648e89eba2c888ef9b799670a5e.scope: Deactivated successfully.
Nov 29 01:20:35 np0005539508 podman[92991]: 2025-11-29 06:20:35.063773814 +0000 UTC m=+0.921312096 container died e7b4a3a3304f46a35b72d7b9d75aecbac93f5648e89eba2c888ef9b799670a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_davinci, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:20:35 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.011550903s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675140381s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.1( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.110986710s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 129.775024414s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.1( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.110947609s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.775024414s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.e( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.011586189s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675216675s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.010724068s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675140381s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.e( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.010685921s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675216675s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.3( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.010158539s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675186157s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.7( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.109983444s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 129.775039673s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.3( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.010080338s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675186157s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.7( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.109938622s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.775039673s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.6( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.010134697s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675292969s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.6( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.010087013s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675292969s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.1d( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.010012627s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675369263s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.d( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.109688759s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 129.775054932s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.1f( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.010006905s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675384521s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.d( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.109664917s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.775054932s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.1d( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009965897s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675369263s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.1f( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009955406s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675384521s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.13( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009961128s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675552368s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.13( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009877205s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675552368s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.010269165s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675262451s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.15( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009781837s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675582886s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.15( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009729385s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675582886s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.19( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009824753s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675582886s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.5( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.109353065s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 129.775207520s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.5( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.109261513s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.775207520s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009334564s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675262451s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.3( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.109200478s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 129.775253296s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.2( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.109324455s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 129.775314331s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.19( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009631157s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675582886s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.3( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.109173775s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.775253296s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.2( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.109218597s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.775314331s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.1( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009421349s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675613403s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.1( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009397507s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675613403s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009315491s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675613403s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009278297s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675613403s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.18( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009585381s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675949097s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.8( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.108906746s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 129.775283813s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.18( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009565353s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675949097s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.a( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009493828s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675933838s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.8( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.108855247s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.775283813s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.e( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.108835220s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 129.775375366s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.a( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009438515s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675933838s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.e( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.108816147s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.775375366s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.c( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009423256s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675994873s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.c( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009394646s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675994873s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.a( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.108706474s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 129.775405884s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009249687s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.676055908s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.a( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.108654976s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.775405884s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009185791s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.676055908s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009135246s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.676010132s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009145737s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.676025391s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009089470s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.676010132s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009093285s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.676025391s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.1c( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009002686s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.676071167s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.1b( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.008987427s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.676071167s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.1c( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.008976936s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.676071167s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.1a( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009074211s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.676071167s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.1b( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.008937836s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.676071167s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.1a( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.008885384s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.676071167s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:20:35 np0005539508 systemd[1]: var-lib-containers-storage-overlay-56e830be9267e2f35f51f918336748e6606ad77a7ef067424a91acac31ab50f2-merged.mount: Deactivated successfully.
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.b( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.10( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.1e( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 podman[92991]: 2025-11-29 06:20:35.337185026 +0000 UTC m=+1.194723348 container remove e7b4a3a3304f46a35b72d7b9d75aecbac93f5648e89eba2c888ef9b799670a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:20:35 np0005539508 systemd[1]: libpod-conmon-e7b4a3a3304f46a35b72d7b9d75aecbac93f5648e89eba2c888ef9b799670a5e.scope: Deactivated successfully.
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.b scrub starts
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.b scrub ok
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[5.17( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[5.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[3.4( empty local-lis/les=0/0 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[3.2( empty local-lis/les=0/0 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[3.19( empty local-lis/les=0/0 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[5.6( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[5.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[3.b( empty local-lis/les=0/0 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:35 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v148: 177 pgs: 14 peering, 163 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:20:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 29 01:20:36 np0005539508 podman[93169]: 2025-11-29 06:20:36.017510303 +0000 UTC m=+0.024005232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:20:36 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:36 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:36 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 01:20:36 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:36 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:20:36 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:20:36 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:20:36 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 01:20:36 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:20:36 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:20:36 np0005539508 podman[93169]: 2025-11-29 06:20:36.279422454 +0000 UTC m=+0.285917383 container create 8e8f45794574911dcfb8be7046cfd07457c95d608b71e54ba6631f6668926a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_merkle, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 01:20:36 np0005539508 systemd[1]: Started libpod-conmon-8e8f45794574911dcfb8be7046cfd07457c95d608b71e54ba6631f6668926a07.scope.
Nov 29 01:20:36 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:20:36 np0005539508 podman[93169]: 2025-11-29 06:20:36.637791132 +0000 UTC m=+0.644286051 container init 8e8f45794574911dcfb8be7046cfd07457c95d608b71e54ba6631f6668926a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_merkle, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:20:36 np0005539508 podman[93169]: 2025-11-29 06:20:36.644527976 +0000 UTC m=+0.651022865 container start 8e8f45794574911dcfb8be7046cfd07457c95d608b71e54ba6631f6668926a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 01:20:36 np0005539508 happy_merkle[93185]: 167 167
Nov 29 01:20:36 np0005539508 systemd[1]: libpod-8e8f45794574911dcfb8be7046cfd07457c95d608b71e54ba6631f6668926a07.scope: Deactivated successfully.
Nov 29 01:20:36 np0005539508 podman[93169]: 2025-11-29 06:20:36.649275233 +0000 UTC m=+0.655770142 container attach 8e8f45794574911dcfb8be7046cfd07457c95d608b71e54ba6631f6668926a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:20:36 np0005539508 podman[93169]: 2025-11-29 06:20:36.650185099 +0000 UTC m=+0.656679988 container died 8e8f45794574911dcfb8be7046cfd07457c95d608b71e54ba6631f6668926a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_merkle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:20:36 np0005539508 systemd[1]: var-lib-containers-storage-overlay-abf0fb127359f941f25557de3092731ae4976734341faa206ce7dd392f0a3941-merged.mount: Deactivated successfully.
Nov 29 01:20:36 np0005539508 podman[93169]: 2025-11-29 06:20:36.683073296 +0000 UTC m=+0.689568185 container remove 8e8f45794574911dcfb8be7046cfd07457c95d608b71e54ba6631f6668926a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_merkle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:20:36 np0005539508 systemd[1]: libpod-conmon-8e8f45794574911dcfb8be7046cfd07457c95d608b71e54ba6631f6668926a07.scope: Deactivated successfully.
Nov 29 01:20:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 29 01:20:36 np0005539508 podman[93209]: 2025-11-29 06:20:36.826476295 +0000 UTC m=+0.045893413 container create ba6c042fa54b03b956c4a001f447ec3e69392ef4146cffa47bc7a6f65adedaad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_gould, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 01:20:36 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[3.4( empty local-lis/les=42/43 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[3.2( empty local-lis/les=42/43 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=42/43 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[5.1e( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.1e( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.10( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[5.5( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[5.17( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=42/43 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[5.14( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=42/43 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[5.3( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=42/43 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[5.a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[3.b( empty local-lis/les=42/43 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=42/43 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.b( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[3.19( empty local-lis/les=42/43 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[5.6( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:36 np0005539508 systemd[1]: Started libpod-conmon-ba6c042fa54b03b956c4a001f447ec3e69392ef4146cffa47bc7a6f65adedaad.scope.
Nov 29 01:20:36 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:20:36 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c932cfcfa3ed4f9cb911f67c594f6081d273bfb19a9105de53f1a3b875a97e34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:20:36 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c932cfcfa3ed4f9cb911f67c594f6081d273bfb19a9105de53f1a3b875a97e34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:20:36 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c932cfcfa3ed4f9cb911f67c594f6081d273bfb19a9105de53f1a3b875a97e34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:20:36 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c932cfcfa3ed4f9cb911f67c594f6081d273bfb19a9105de53f1a3b875a97e34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:20:36 np0005539508 podman[93209]: 2025-11-29 06:20:36.801565157 +0000 UTC m=+0.020982275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:20:36 np0005539508 podman[93209]: 2025-11-29 06:20:36.93192368 +0000 UTC m=+0.151340808 container init ba6c042fa54b03b956c4a001f447ec3e69392ef4146cffa47bc7a6f65adedaad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 01:20:36 np0005539508 podman[93209]: 2025-11-29 06:20:36.938384276 +0000 UTC m=+0.157801364 container start ba6c042fa54b03b956c4a001f447ec3e69392ef4146cffa47bc7a6f65adedaad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_gould, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 01:20:36 np0005539508 podman[93209]: 2025-11-29 06:20:36.955550891 +0000 UTC m=+0.174967989 container attach ba6c042fa54b03b956c4a001f447ec3e69392ef4146cffa47bc7a6f65adedaad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_gould, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:20:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:20:37 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 29 01:20:37 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 29 01:20:37 np0005539508 ecstatic_gould[93225]: {
Nov 29 01:20:37 np0005539508 ecstatic_gould[93225]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:20:37 np0005539508 ecstatic_gould[93225]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:20:37 np0005539508 ecstatic_gould[93225]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:20:37 np0005539508 ecstatic_gould[93225]:        "osd_id": 1,
Nov 29 01:20:37 np0005539508 ecstatic_gould[93225]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:20:37 np0005539508 ecstatic_gould[93225]:        "type": "bluestore"
Nov 29 01:20:37 np0005539508 ecstatic_gould[93225]:    }
Nov 29 01:20:37 np0005539508 ecstatic_gould[93225]: }
Nov 29 01:20:37 np0005539508 systemd[1]: libpod-ba6c042fa54b03b956c4a001f447ec3e69392ef4146cffa47bc7a6f65adedaad.scope: Deactivated successfully.
Nov 29 01:20:37 np0005539508 podman[93209]: 2025-11-29 06:20:37.75021946 +0000 UTC m=+0.969636538 container died ba6c042fa54b03b956c4a001f447ec3e69392ef4146cffa47bc7a6f65adedaad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:20:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 29 01:20:37 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v150: 177 pgs: 55 peering, 122 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:20:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 29 01:20:38 np0005539508 systemd[1]: var-lib-containers-storage-overlay-c932cfcfa3ed4f9cb911f67c594f6081d273bfb19a9105de53f1a3b875a97e34-merged.mount: Deactivated successfully.
Nov 29 01:20:38 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 29 01:20:38 np0005539508 podman[93209]: 2025-11-29 06:20:38.89225965 +0000 UTC m=+2.111676768 container remove ba6c042fa54b03b956c4a001f447ec3e69392ef4146cffa47bc7a6f65adedaad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:20:38 np0005539508 systemd[1]: libpod-conmon-ba6c042fa54b03b956c4a001f447ec3e69392ef4146cffa47bc7a6f65adedaad.scope: Deactivated successfully.
Nov 29 01:20:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:20:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:20:39 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:39 np0005539508 ceph-mgr[74948]: [progress INFO root] update: starting ev f17e2d30-47bb-4995-954f-855268d5acf9 (Updating rgw.rgw deployment (+3 -> 3))
Nov 29 01:20:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.pkypgd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 29 01:20:39 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.pkypgd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 01:20:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v152: 177 pgs: 75 peering, 102 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:20:40 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 29 01:20:40 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 29 01:20:41 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.pkypgd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 01:20:41 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 29 01:20:41 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:41 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:41 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.pkypgd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 01:20:41 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:41 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:20:41 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:20:41 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.pkypgd on compute-2
Nov 29 01:20:41 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.pkypgd on compute-2
Nov 29 01:20:41 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v153: 177 pgs: 20 peering, 157 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:20:42 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.pkypgd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 01:20:42 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:42 np0005539508 ceph-mon[74654]: Deploying daemon rgw.rgw.compute-2.pkypgd on compute-2
Nov 29 01:20:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:20:42 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.11 deep-scrub starts
Nov 29 01:20:42 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.11 deep-scrub ok
Nov 29 01:20:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:20:43 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v154: 177 pgs: 177 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:20:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 29 01:20:45 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:45 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:20:45 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v155: 177 pgs: 177 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:20:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 29 01:20:46 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 29 01:20:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Nov 29 01:20:46 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 01:20:46 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 45 pg[8.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 29 01:20:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 01:20:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 29 01:20:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 29 01:20:47 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:47 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.102:0/1290272359' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 01:20:47 np0005539508 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 01:20:47 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 29 01:20:47 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v158: 178 pgs: 1 unknown, 177 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:20:48 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:48 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.cbugbv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 29 01:20:48 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.cbugbv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 01:20:48 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 46 pg[8.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:48 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.cbugbv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 01:20:48 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 29 01:20:48 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:48 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:20:48 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:20:48 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.cbugbv on compute-1
Nov 29 01:20:48 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.cbugbv on compute-1
Nov 29 01:20:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:48 np0005539508 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 29 01:20:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.cbugbv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 01:20:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.cbugbv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 01:20:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:48 np0005539508 ceph-mon[74654]: Deploying daemon rgw.rgw.compute-1.cbugbv on compute-1
Nov 29 01:20:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 29 01:20:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 29 01:20:49 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 29 01:20:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 29 01:20:49 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 01:20:49 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 47 pg[9.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:49 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v160: 179 pgs: 2 unknown, 177 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:20:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 29 01:20:50 np0005539508 ceph-mon[74654]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 01:20:50 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.102:0/1290272359' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 01:20:50 np0005539508 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 01:20:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 01:20:50 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 29 01:20:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 29 01:20:50 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 29 01:20:50 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 48 pg[9.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:50 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:20:51 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:51 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 01:20:51 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:51 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.vmptkp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 29 01:20:51 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.vmptkp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 01:20:51 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.vmptkp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 01:20:51 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 29 01:20:51 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:51 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:20:51 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:20:51 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.vmptkp on compute-0
Nov 29 01:20:51 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.vmptkp on compute-0
Nov 29 01:20:51 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 29 01:20:51 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 29 01:20:51 np0005539508 ceph-mon[74654]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 01:20:51 np0005539508 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 29 01:20:51 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:51 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:51 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:51 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.vmptkp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 01:20:51 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.vmptkp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 01:20:51 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:51 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 29 01:20:51 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v162: 179 pgs: 1 creating+peering, 178 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 705 B/s rd, 705 B/s wr, 1 op/s
Nov 29 01:20:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 29 01:20:52 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 29 01:20:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 29 01:20:52 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 01:20:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 29 01:20:52 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.cbugbv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 01:20:52 np0005539508 podman[93408]: 2025-11-29 06:20:52.144450804 +0000 UTC m=+0.093100752 container create 69dfe31d98b6ca5d82f1d1be9292adf10c6995b65be0c2080ee98d918d375f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_blackwell, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:20:52 np0005539508 podman[93408]: 2025-11-29 06:20:52.077649091 +0000 UTC m=+0.026299059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:20:52 np0005539508 systemd[1]: Started libpod-conmon-69dfe31d98b6ca5d82f1d1be9292adf10c6995b65be0c2080ee98d918d375f90.scope.
Nov 29 01:20:52 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:20:52 np0005539508 podman[93408]: 2025-11-29 06:20:52.390577689 +0000 UTC m=+0.339227627 container init 69dfe31d98b6ca5d82f1d1be9292adf10c6995b65be0c2080ee98d918d375f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_blackwell, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:20:52 np0005539508 podman[93408]: 2025-11-29 06:20:52.399036993 +0000 UTC m=+0.347686901 container start 69dfe31d98b6ca5d82f1d1be9292adf10c6995b65be0c2080ee98d918d375f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_blackwell, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 01:20:52 np0005539508 silly_blackwell[93424]: 167 167
Nov 29 01:20:52 np0005539508 systemd[1]: libpod-69dfe31d98b6ca5d82f1d1be9292adf10c6995b65be0c2080ee98d918d375f90.scope: Deactivated successfully.
Nov 29 01:20:52 np0005539508 podman[93408]: 2025-11-29 06:20:52.422195669 +0000 UTC m=+0.370845617 container attach 69dfe31d98b6ca5d82f1d1be9292adf10c6995b65be0c2080ee98d918d375f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_blackwell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:20:52 np0005539508 podman[93408]: 2025-11-29 06:20:52.422813917 +0000 UTC m=+0.371463825 container died 69dfe31d98b6ca5d82f1d1be9292adf10c6995b65be0c2080ee98d918d375f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:20:52 np0005539508 systemd[1]: var-lib-containers-storage-overlay-25189d67f06a1025f7e8d35e6bdc5f68bb356700a8f22a35d856f7e5c0092d66-merged.mount: Deactivated successfully.
Nov 29 01:20:52 np0005539508 podman[93408]: 2025-11-29 06:20:52.582273098 +0000 UTC m=+0.530923006 container remove 69dfe31d98b6ca5d82f1d1be9292adf10c6995b65be0c2080ee98d918d375f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:20:52 np0005539508 systemd[1]: libpod-conmon-69dfe31d98b6ca5d82f1d1be9292adf10c6995b65be0c2080ee98d918d375f90.scope: Deactivated successfully.
Nov 29 01:20:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:20:52 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 01:20:52 np0005539508 ceph-mon[74654]: Deploying daemon rgw.rgw.compute-0.vmptkp on compute-0
Nov 29 01:20:52 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.102:0/1290272359' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 01:20:52 np0005539508 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 01:20:52 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.101:0/1253186838' entity='client.rgw.rgw.compute-1.cbugbv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 01:20:52 np0005539508 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-1.cbugbv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 01:20:52 np0005539508 systemd[1]: Reloading.
Nov 29 01:20:52 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:20:52 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:20:53 np0005539508 systemd[1]: Reloading.
Nov 29 01:20:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 29 01:20:53 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:20:53 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:20:53 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 01:20:53 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.cbugbv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 01:20:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 29 01:20:53 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 29 01:20:53 np0005539508 systemd[1]: Starting Ceph rgw.rgw.compute-0.vmptkp for 336ec58c-893b-528f-a0c1-6ed1196bc047...
Nov 29 01:20:53 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Nov 29 01:20:53 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Nov 29 01:20:53 np0005539508 podman[93569]: 2025-11-29 06:20:53.58298532 +0000 UTC m=+0.082491136 container create 74d56036cbc89ee6065295b06ea4b6794f8c605a9bb9107773989fec28b7c37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-rgw-rgw-compute-0-vmptkp, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:20:53 np0005539508 podman[93569]: 2025-11-29 06:20:53.524748513 +0000 UTC m=+0.024254349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:20:53 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf99ae86100057763cdf896c416afb84015183091bd9e0a7fce55dd18cb20a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:20:53 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf99ae86100057763cdf896c416afb84015183091bd9e0a7fce55dd18cb20a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:20:53 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf99ae86100057763cdf896c416afb84015183091bd9e0a7fce55dd18cb20a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:20:53 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf99ae86100057763cdf896c416afb84015183091bd9e0a7fce55dd18cb20a1/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.vmptkp supports timestamps until 2038 (0x7fffffff)
Nov 29 01:20:53 np0005539508 podman[93569]: 2025-11-29 06:20:53.771307102 +0000 UTC m=+0.270812928 container init 74d56036cbc89ee6065295b06ea4b6794f8c605a9bb9107773989fec28b7c37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-rgw-rgw-compute-0-vmptkp, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:20:53 np0005539508 podman[93569]: 2025-11-29 06:20:53.778834378 +0000 UTC m=+0.278340184 container start 74d56036cbc89ee6065295b06ea4b6794f8c605a9bb9107773989fec28b7c37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-rgw-rgw-compute-0-vmptkp, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 01:20:53 np0005539508 bash[93569]: 74d56036cbc89ee6065295b06ea4b6794f8c605a9bb9107773989fec28b7c37d
Nov 29 01:20:53 np0005539508 systemd[1]: Started Ceph rgw.rgw.compute-0.vmptkp for 336ec58c-893b-528f-a0c1-6ed1196bc047.
Nov 29 01:20:53 np0005539508 radosgw[93592]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 29 01:20:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:20:53 np0005539508 radosgw[93592]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 29 01:20:53 np0005539508 radosgw[93592]: framework: beast
Nov 29 01:20:53 np0005539508 radosgw[93592]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 29 01:20:53 np0005539508 radosgw[93592]: init_numa not setting numa affinity
Nov 29 01:20:53 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v165: 180 pgs: 1 unknown, 1 creating+peering, 178 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 841 B/s rd, 841 B/s wr, 1 op/s
Nov 29 01:20:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:20:54
Nov 29 01:20:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:20:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Some PGs (0.005556) are unknown; try again later
Nov 29 01:20:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 29 01:20:54 np0005539508 ceph-mon[74654]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 01:20:54 np0005539508 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 01:20:54 np0005539508 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-1.cbugbv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 01:20:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:20:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:20:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:20:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:20:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:20:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:20:54 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 29 01:20:54 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 29 01:20:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 29 01:20:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:54 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 29 01:20:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:20:54 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=0/0 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [1] r=0 lpr=51 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:20:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 29 01:20:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.cbugbv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 01:20:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 29 01:20:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/49466279' entity='client.rgw.rgw.compute-0.vmptkp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 01:20:55 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 29 01:20:55 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v167: 181 pgs: 1 creating+peering, 1 unknown, 179 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 402 B/s wr, 4 op/s
Nov 29 01:20:56 np0005539508 ceph-mon[74654]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 01:20:56 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 01:20:56 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 29 01:20:56 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 29 01:20:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 29 01:20:56 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.cbugbv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/49466279' entity='client.rgw.rgw.compute-0.vmptkp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.101:0/111233770' entity='client.rgw.rgw.compute-1.cbugbv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-1.cbugbv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/49466279' entity='client.rgw.rgw.compute-0.vmptkp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.cbugbv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/49466279' entity='client.rgw.rgw.compute-0.vmptkp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:57 np0005539508 ceph-mgr[74948]: [progress INFO root] complete: finished ev f17e2d30-47bb-4995-954f-855268d5acf9 (Updating rgw.rgw deployment (+3 -> 3))
Nov 29 01:20:57 np0005539508 ceph-mgr[74948]: [progress INFO root] Completed event f17e2d30-47bb-4995-954f-855268d5acf9 (Updating rgw.rgw deployment (+3 -> 3)) in 18 seconds
Nov 29 01:20:57 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 01:20:57 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 01:20:57 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [1] r=0 lpr=51 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:57 np0005539508 ceph-mgr[74948]: [progress INFO root] update: starting ev f69c7611-808e-4a28-94ca-4532cf709bfe (Updating mds.cephfs deployment (+3 -> 3))
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.gxdwyy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.gxdwyy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.gxdwyy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:20:57 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.gxdwyy on compute-2
Nov 29 01:20:57 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.gxdwyy on compute-2
Nov 29 01:20:57 np0005539508 ceph-mgr[74948]: [progress INFO root] Writing back 12 completed events
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:20:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v169: 181 pgs: 1 creating+peering, 180 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.9 KiB/s rd, 346 B/s wr, 4 op/s
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.102:0/2594248517' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-1.cbugbv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/49466279' entity='client.rgw.rgw.compute-0.vmptkp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.101:0/111233770' entity='client.rgw.rgw.compute-1.cbugbv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-1.cbugbv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/49466279' entity='client.rgw.rgw.compute-0.vmptkp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.gxdwyy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.gxdwyy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: Deploying daemon mds.cephfs.compute-2.gxdwyy on compute-2
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.cbugbv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/49466279' entity='client.rgw.rgw.compute-0.vmptkp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 29 01:20:58 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 01:20:58 np0005539508 python3[93692]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:20:58 np0005539508 podman[93693]: 2025-11-29 06:20:58.60814074 +0000 UTC m=+0.068187165 container create fc3580c8deb7cfcdd4aae3a428e0a3f8b5fa5c03e3e6b50d583858c644709514 (image=quay.io/ceph/ceph:v18, name=friendly_panini, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 01:20:58 np0005539508 systemd[1]: Started libpod-conmon-fc3580c8deb7cfcdd4aae3a428e0a3f8b5fa5c03e3e6b50d583858c644709514.scope.
Nov 29 01:20:58 np0005539508 podman[93693]: 2025-11-29 06:20:58.566456159 +0000 UTC m=+0.026502584 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:20:58 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:20:58 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92963444a5bb3debf35d5d96d1894f680d7f37ae86774328fc581b9236be2d31/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:20:58 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92963444a5bb3debf35d5d96d1894f680d7f37ae86774328fc581b9236be2d31/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:20:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:20:59 np0005539508 podman[93693]: 2025-11-29 06:20:59.428704154 +0000 UTC m=+0.888750659 container init fc3580c8deb7cfcdd4aae3a428e0a3f8b5fa5c03e3e6b50d583858c644709514 (image=quay.io/ceph/ceph:v18, name=friendly_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 01:20:59 np0005539508 podman[93693]: 2025-11-29 06:20:59.440517564 +0000 UTC m=+0.900564009 container start fc3580c8deb7cfcdd4aae3a428e0a3f8b5fa5c03e3e6b50d583858c644709514 (image=quay.io/ceph/ceph:v18, name=friendly_panini, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 01:20:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 29 01:20:59 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:20:59 np0005539508 podman[93693]: 2025-11-29 06:20:59.84671987 +0000 UTC m=+1.306766325 container attach fc3580c8deb7cfcdd4aae3a428e0a3f8b5fa5c03e3e6b50d583858c644709514 (image=quay.io/ceph/ceph:v18, name=friendly_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:20:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:21:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v171: 181 pgs: 1 creating+peering, 180 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.8 KiB/s rd, 341 B/s wr, 3 op/s
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e3 new map
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T06:19:35.588785+0000#012modified#0112025-11-29T06:19:35.589013+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.gxdwyy{-1:24145} state up:standby seq 1 addr [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-1.cbugbv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.100:0/49466279' entity='client.rgw.rgw.compute-0.vmptkp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: from='client.? 192.168.122.102:0/2594248517' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] up:boot
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] as mds.0
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.gxdwyy assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 01:21:00 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from mds.cephfs.compute-2.gxdwyy v2:192.168.122.102:6804/1811763433; not ready for session (expect reconnect)
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.gxdwyy"} v 0) v1
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.gxdwyy"}]: dispatch
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e3 all = 0
Nov 29 01:21:00 np0005539508 friendly_panini[93709]: could not fetch user info: no user info saved
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e4 new map
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T06:19:35.588785+0000#012modified#0112025-11-29T06:21:00.645745+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24145}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.gxdwyy{0:24145} state up:creating seq 1 addr [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:creating}
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jzycnf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jzycnf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.gxdwyy is now active in filesystem cephfs as rank 0
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jzycnf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 01:21:00 np0005539508 systemd[1]: libpod-fc3580c8deb7cfcdd4aae3a428e0a3f8b5fa5c03e3e6b50d583858c644709514.scope: Deactivated successfully.
Nov 29 01:21:00 np0005539508 podman[93693]: 2025-11-29 06:21:00.747117353 +0000 UTC m=+2.207163778 container died fc3580c8deb7cfcdd4aae3a428e0a3f8b5fa5c03e3e6b50d583858c644709514 (image=quay.io/ceph/ceph:v18, name=friendly_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:21:00 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:21:00 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.jzycnf on compute-0
Nov 29 01:21:00 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.jzycnf on compute-0
Nov 29 01:21:00 np0005539508 systemd[1]: var-lib-containers-storage-overlay-92963444a5bb3debf35d5d96d1894f680d7f37ae86774328fc581b9236be2d31-merged.mount: Deactivated successfully.
Nov 29 01:21:00 np0005539508 podman[93693]: 2025-11-29 06:21:00.806093441 +0000 UTC m=+2.266139866 container remove fc3580c8deb7cfcdd4aae3a428e0a3f8b5fa5c03e3e6b50d583858c644709514 (image=quay.io/ceph/ceph:v18, name=friendly_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 01:21:00 np0005539508 systemd[1]: libpod-conmon-fc3580c8deb7cfcdd4aae3a428e0a3f8b5fa5c03e3e6b50d583858c644709514.scope: Deactivated successfully.
Nov 29 01:21:00 np0005539508 radosgw[93592]: LDAP not started since no server URIs were provided in the configuration.
Nov 29 01:21:00 np0005539508 radosgw[93592]: framework: beast
Nov 29 01:21:00 np0005539508 radosgw[93592]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 29 01:21:00 np0005539508 radosgw[93592]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 29 01:21:00 np0005539508 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 29 01:21:00 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-rgw-rgw-compute-0-vmptkp[93585]: 2025-11-29T06:21:00.840+0000 7f7db64b5940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 29 01:21:00 np0005539508 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 29 01:21:00 np0005539508 radosgw[93592]: starting handler: beast
Nov 29 01:21:00 np0005539508 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 29 01:21:00 np0005539508 radosgw[93592]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 01:21:00 np0005539508 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 29 01:21:00 np0005539508 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 29 01:21:00 np0005539508 radosgw[93592]: mgrc service_daemon_register rgw.14361 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.vmptkp,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864324,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=916ce3c8-b215-47fd-909b-03c5b552b52f,zone_name=default,zonegroup_id=a7fe8251-a74c-4f06-a680-d530d14bb192,zonegroup_name=default}
Nov 29 01:21:00 np0005539508 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 29 01:21:00 np0005539508 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 29 01:21:01 np0005539508 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Nov 29 01:21:01 np0005539508 python3[94460]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:21:01 np0005539508 podman[94474]: 2025-11-29 06:21:01.238527151 +0000 UTC m=+0.072278892 container create 7ef84b47b96b8d1211df6e8194eecae06804c40df4ce45ec949830b04486961d (image=quay.io/ceph/ceph:v18, name=jovial_lalande, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:21:01 np0005539508 podman[94474]: 2025-11-29 06:21:01.204372448 +0000 UTC m=+0.038124099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 01:21:01 np0005539508 systemd[1]: Started libpod-conmon-7ef84b47b96b8d1211df6e8194eecae06804c40df4ce45ec949830b04486961d.scope.
Nov 29 01:21:01 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:21:01 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b393204dd4b0132d92cc4649fd8946f9d7ddfc7562cf7f0a7e0ab6fc7b58bcc9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:21:01 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b393204dd4b0132d92cc4649fd8946f9d7ddfc7562cf7f0a7e0ab6fc7b58bcc9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:21:01 np0005539508 podman[94474]: 2025-11-29 06:21:01.525423311 +0000 UTC m=+0.359174992 container init 7ef84b47b96b8d1211df6e8194eecae06804c40df4ce45ec949830b04486961d (image=quay.io/ceph/ceph:v18, name=jovial_lalande, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 01:21:01 np0005539508 podman[94474]: 2025-11-29 06:21:01.56045812 +0000 UTC m=+0.394209761 container start 7ef84b47b96b8d1211df6e8194eecae06804c40df4ce45ec949830b04486961d (image=quay.io/ceph/ceph:v18, name=jovial_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 01:21:01 np0005539508 podman[94474]: 2025-11-29 06:21:01.948539093 +0000 UTC m=+0.782290814 container attach 7ef84b47b96b8d1211df6e8194eecae06804c40df4ce45ec949830b04486961d (image=quay.io/ceph/ceph:v18, name=jovial_lalande, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:21:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v173: 181 pgs: 181 active+clean; 452 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 3.8 KiB/s wr, 13 op/s
Nov 29 01:21:02 np0005539508 podman[94558]: 2025-11-29 06:21:02.011163596 +0000 UTC m=+0.032888098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:21:02 np0005539508 ceph-mgr[74948]: [progress INFO root] Completed event f0af229f-db58-4777-9300-7823e92993ef (Global Recovery Event) in 58 seconds
Nov 29 01:21:02 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:02 np0005539508 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 01:21:02 np0005539508 ceph-mon[74654]: daemon mds.cephfs.compute-2.gxdwyy assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 29 01:21:02 np0005539508 ceph-mon[74654]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 29 01:21:02 np0005539508 ceph-mon[74654]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 29 01:21:02 np0005539508 ceph-mon[74654]: Cluster is now healthy
Nov 29 01:21:02 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:02 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:02 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jzycnf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 01:21:02 np0005539508 ceph-mon[74654]: daemon mds.cephfs.compute-2.gxdwyy is now active in filesystem cephfs as rank 0
Nov 29 01:21:02 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jzycnf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 01:21:02 np0005539508 ceph-mon[74654]: Deploying daemon mds.cephfs.compute-0.jzycnf on compute-0
Nov 29 01:21:02 np0005539508 podman[94558]: 2025-11-29 06:21:02.384153935 +0000 UTC m=+0.405878437 container create 1db1f2b60f0c93be3607acca398c23bf947cb7c1ea4b97288f46967706609664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mcclintock, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 01:21:02 np0005539508 systemd[1]: Started libpod-conmon-1db1f2b60f0c93be3607acca398c23bf947cb7c1ea4b97288f46967706609664.scope.
Nov 29 01:21:02 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:21:02 np0005539508 podman[94558]: 2025-11-29 06:21:02.661769638 +0000 UTC m=+0.683494110 container init 1db1f2b60f0c93be3607acca398c23bf947cb7c1ea4b97288f46967706609664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mcclintock, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:21:02 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:21:02 np0005539508 podman[94558]: 2025-11-29 06:21:02.67157971 +0000 UTC m=+0.693304172 container start 1db1f2b60f0c93be3607acca398c23bf947cb7c1ea4b97288f46967706609664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mcclintock, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:21:02 np0005539508 nervous_mcclintock[94615]: 167 167
Nov 29 01:21:02 np0005539508 podman[94558]: 2025-11-29 06:21:02.676981216 +0000 UTC m=+0.698705678 container attach 1db1f2b60f0c93be3607acca398c23bf947cb7c1ea4b97288f46967706609664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mcclintock, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:21:02 np0005539508 systemd[1]: libpod-1db1f2b60f0c93be3607acca398c23bf947cb7c1ea4b97288f46967706609664.scope: Deactivated successfully.
Nov 29 01:21:02 np0005539508 podman[94558]: 2025-11-29 06:21:02.678120909 +0000 UTC m=+0.699845411 container died 1db1f2b60f0c93be3607acca398c23bf947cb7c1ea4b97288f46967706609664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]: {
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:    "user_id": "openstack",
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:    "display_name": "openstack",
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:    "email": "",
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:    "suspended": 0,
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:    "max_buckets": 1000,
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:    "subusers": [],
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:    "keys": [
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:        {
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:            "user": "openstack",
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:            "access_key": "R6E8YK4W4T3CTN23FBKD",
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:            "secret_key": "y5AKHfabfxYBgWBxC6rwwMHQuHvZBwkmJTopzDw5"
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:        }
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:    ],
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:    "swift_keys": [],
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:    "caps": [],
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:    "op_mask": "read, write, delete",
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:    "default_placement": "",
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:    "default_storage_class": "",
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:    "placement_tags": [],
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:    "bucket_quota": {
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:        "enabled": false,
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:        "check_on_raw": false,
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:        "max_size": -1,
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:        "max_size_kb": 0,
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:        "max_objects": -1
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:    },
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:    "user_quota": {
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:        "enabled": false,
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:        "check_on_raw": false,
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:        "max_size": -1,
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:        "max_size_kb": 0,
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:        "max_objects": -1
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:    },
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:    "temp_url_keys": [],
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:    "type": "rgw",
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]:    "mfa_ids": []
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]: }
Nov 29 01:21:02 np0005539508 jovial_lalande[94498]: 
Nov 29 01:21:02 np0005539508 systemd[1]: var-lib-containers-storage-overlay-c47be74bf5cdd88d44d52b523ef1ba9f2782e219fdbe17e185710c63fdbfadf5-merged.mount: Deactivated successfully.
Nov 29 01:21:02 np0005539508 podman[94558]: 2025-11-29 06:21:02.723106934 +0000 UTC m=+0.744831426 container remove 1db1f2b60f0c93be3607acca398c23bf947cb7c1ea4b97288f46967706609664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mcclintock, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:21:02 np0005539508 systemd[1]: libpod-conmon-1db1f2b60f0c93be3607acca398c23bf947cb7c1ea4b97288f46967706609664.scope: Deactivated successfully.
Nov 29 01:21:02 np0005539508 systemd[1]: libpod-7ef84b47b96b8d1211df6e8194eecae06804c40df4ce45ec949830b04486961d.scope: Deactivated successfully.
Nov 29 01:21:02 np0005539508 podman[94474]: 2025-11-29 06:21:02.767893794 +0000 UTC m=+1.601645435 container died 7ef84b47b96b8d1211df6e8194eecae06804c40df4ce45ec949830b04486961d (image=quay.io/ceph/ceph:v18, name=jovial_lalande, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 01:21:02 np0005539508 systemd[1]: var-lib-containers-storage-overlay-b393204dd4b0132d92cc4649fd8946f9d7ddfc7562cf7f0a7e0ab6fc7b58bcc9-merged.mount: Deactivated successfully.
Nov 29 01:21:02 np0005539508 systemd[1]: Reloading.
Nov 29 01:21:02 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:21:02 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:21:02 np0005539508 podman[94474]: 2025-11-29 06:21:02.969456266 +0000 UTC m=+1.803207897 container remove 7ef84b47b96b8d1211df6e8194eecae06804c40df4ce45ec949830b04486961d (image=quay.io/ceph/ceph:v18, name=jovial_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 01:21:03 np0005539508 systemd[1]: libpod-conmon-7ef84b47b96b8d1211df6e8194eecae06804c40df4ce45ec949830b04486961d.scope: Deactivated successfully.
Nov 29 01:21:03 np0005539508 systemd[1]: Reloading.
Nov 29 01:21:03 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:21:03 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:21:03 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e5 new map
Nov 29 01:21:03 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T06:19:35.588785+0000#012modified#0112025-11-29T06:21:01.949294+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24145}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.gxdwyy{0:24145} state up:active seq 2 addr [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Nov 29 01:21:03 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] up:active
Nov 29 01:21:03 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active}
Nov 29 01:21:03 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 29 01:21:03 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 29 01:21:03 np0005539508 systemd[1]: Starting Ceph mds.cephfs.compute-0.jzycnf for 336ec58c-893b-528f-a0c1-6ed1196bc047...
Nov 29 01:21:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v174: 181 pgs: 181 active+clean; 452 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 1.0 KiB/s rd, 3.3 KiB/s wr, 11 op/s
Nov 29 01:21:04 np0005539508 podman[94791]: 2025-11-29 06:21:03.912716504 +0000 UTC m=+0.024891758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:21:04 np0005539508 podman[94791]: 2025-11-29 06:21:04.230774511 +0000 UTC m=+0.342949685 container create 4848c8d8bb5fa4a7cc59121390b320b141644cb5003af1bd82d97c12a873a76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mds-cephfs-compute-0-jzycnf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 01:21:04 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52f32100da6d4ec69e9aaf0d3ee2060f68728b7b6dc4bad2bf83446d612ba8b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:21:04 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52f32100da6d4ec69e9aaf0d3ee2060f68728b7b6dc4bad2bf83446d612ba8b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:21:04 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52f32100da6d4ec69e9aaf0d3ee2060f68728b7b6dc4bad2bf83446d612ba8b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:21:04 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52f32100da6d4ec69e9aaf0d3ee2060f68728b7b6dc4bad2bf83446d612ba8b9/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.jzycnf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:21:04 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 29 01:21:04 np0005539508 podman[94791]: 2025-11-29 06:21:04.664809817 +0000 UTC m=+0.776985091 container init 4848c8d8bb5fa4a7cc59121390b320b141644cb5003af1bd82d97c12a873a76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mds-cephfs-compute-0-jzycnf, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 01:21:04 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 29 01:21:04 np0005539508 podman[94791]: 2025-11-29 06:21:04.675843045 +0000 UTC m=+0.788018259 container start 4848c8d8bb5fa4a7cc59121390b320b141644cb5003af1bd82d97c12a873a76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mds-cephfs-compute-0-jzycnf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 01:21:04 np0005539508 ceph-mds[94810]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 01:21:04 np0005539508 ceph-mds[94810]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 29 01:21:04 np0005539508 ceph-mds[94810]: main not setting numa affinity
Nov 29 01:21:04 np0005539508 ceph-mds[94810]: pidfile_write: ignore empty --pid-file
Nov 29 01:21:04 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mds-cephfs-compute-0-jzycnf[94806]: starting mds.cephfs.compute-0.jzycnf at 
Nov 29 01:21:04 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Updating MDS map to version 5 from mon.0
Nov 29 01:21:05 np0005539508 bash[94791]: 4848c8d8bb5fa4a7cc59121390b320b141644cb5003af1bd82d97c12a873a76b
Nov 29 01:21:05 np0005539508 systemd[1]: Started Ceph mds.cephfs.compute-0.jzycnf for 336ec58c-893b-528f-a0c1-6ed1196bc047.
Nov 29 01:21:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:21:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e6 new map
Nov 29 01:21:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e6 print_map#012e6#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T06:19:35.588785+0000#012modified#0112025-11-29T06:21:01.949294+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24145}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.gxdwyy{0:24145} state up:active seq 2 addr [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.jzycnf{-1:14409} state up:standby seq 1 addr [v2:192.168.122.100:6806/3521074432,v1:192.168.122.100:6807/3521074432] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 01:21:05 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Updating MDS map to version 6 from mon.0
Nov 29 01:21:05 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Monitors have assigned me to become a standby.
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v175: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 122 KiB/s rd, 5.6 KiB/s wr, 219 op/s
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 1)
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 1)
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 1)
Nov 29 01:21:06 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3521074432,v1:192.168.122.100:6807/3521074432] up:boot
Nov 29 01:21:06 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active} 1 up:standby
Nov 29 01:21:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.jzycnf"} v 0) v1
Nov 29 01:21:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.jzycnf"}]: dispatch
Nov 29 01:21:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e6 all = 0
Nov 29 01:21:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 01:21:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 01:21:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e7 new map
Nov 29 01:21:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e7 print_map#012e7#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T06:19:35.588785+0000#012modified#0112025-11-29T06:21:01.949294+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24145}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.gxdwyy{0:24145} state up:active seq 2 addr [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.jzycnf{-1:14409} state up:standby seq 1 addr [v2:192.168.122.100:6806/3521074432,v1:192.168.122.100:6807/3521074432] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 01:21:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:06 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active} 1 up:standby
Nov 29 01:21:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:21:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 01:21:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vlqnad", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 29 01:21:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vlqnad", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 01:21:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vlqnad", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 01:21:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:21:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.vlqnad on compute-1
Nov 29 01:21:06 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.vlqnad on compute-1
Nov 29 01:21:06 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 6.9 deep-scrub starts
Nov 29 01:21:06 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 6.9 deep-scrub ok
Nov 29 01:21:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 29 01:21:07 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 29 01:21:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 29 01:21:07 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 29 01:21:07 np0005539508 ceph-mgr[74948]: [progress INFO root] update: starting ev 4e68207f-6124-4e17-a6a7-080c35b0b4fc (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 29 01:21:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 01:21:07 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 01:21:07 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 01:21:07 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:07 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:07 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:07 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vlqnad", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 01:21:07 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vlqnad", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 01:21:07 np0005539508 ceph-mon[74654]: Deploying daemon mds.cephfs.compute-1.vlqnad on compute-1
Nov 29 01:21:07 np0005539508 ceph-mgr[74948]: [progress INFO root] Writing back 13 completed events
Nov 29 01:21:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 01:21:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:21:07 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v177: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 122 KiB/s rd, 5.6 KiB/s wr, 219 op/s
Nov 29 01:21:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 01:21:08 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:21:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 29 01:21:08 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 29 01:21:08 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:21:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 29 01:21:08 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 29 01:21:08 np0005539508 ceph-mgr[74948]: [progress INFO root] update: starting ev 8a5a0651-b0e6-4304-8cea-03dbf2437fb2 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 29 01:21:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 01:21:08 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 01:21:08 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 29 01:21:08 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 01:21:08 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:08 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:21:08 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 29 01:21:08 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:21:08 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 01:21:08 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 56 pg[8.0( v 46'4 (0'0,46'4] local-lis/les=45/46 n=4 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=11.837022781s) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 46'3 mlcod 46'3 active pruub 164.471450806s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:08 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 56 pg[8.0( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=11.837022781s) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 46'3 mlcod 0'0 unknown pruub 164.471450806s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 01:21:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 29 01:21:09 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 29 01:21:09 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 29 01:21:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v179: 212 pgs: 31 unknown, 181 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 121 KiB/s rd, 2.7 KiB/s wr, 209 op/s
Nov 29 01:21:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 01:21:10 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:21:11 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 29 01:21:11 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 29 01:21:11 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:11 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 29 01:21:11 np0005539508 ceph-mgr[74948]: [progress INFO root] update: starting ev e2c2442c-2f75-44ac-aff9-5dadde01ae6c (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 29 01:21:11 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 29 01:21:11 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:21:11 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 01:21:11 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 01:21:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v181: 212 pgs: 1 peering, 31 unknown, 180 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 198 op/s
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.16( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.2( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.f( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.18( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.9( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.a( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.13( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.11( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.3( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1f( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.19( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1a( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.8( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.15( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.b( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.c( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.d( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.e( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.14( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.6( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.7( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.5( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.4( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1e( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1b( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1d( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1c( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.10( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.12( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.17( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e8 new map
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e8 print_map#012e8#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T06:19:35.588785+0000#012modified#0112025-11-29T06:21:01.949294+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24145}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.gxdwyy{0:24145} state up:active seq 2 addr [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.jzycnf{-1:14409} state up:standby seq 1 addr [v2:192.168.122.100:6806/3521074432,v1:192.168.122.100:6807/3521074432] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.vlqnad{-1:24131} state up:standby seq 1 addr [v2:192.168.122.101:6804/3552238207,v1:192.168.122.101:6805/3552238207] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.f( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.18( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.9( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.2( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.a( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.13( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.3( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.19( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1f( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.8( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.15( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1a( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.b( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.c( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.d( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.11( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.e( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.14( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.6( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.0( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 46'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.7( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.5( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1e( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.16( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1b( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1d( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1c( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.4( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.12( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.10( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.17( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:21:12 np0005539508 ceph-mgr[74948]: mgr.server handle_open ignoring open from mds.cephfs.compute-1.vlqnad v2:192.168.122.101:6804/3552238207; not ready for session (expect reconnect)
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/3552238207,v1:192.168.122.101:6805/3552238207] up:boot
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active} 2 up:standby
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.vlqnad"} v 0) v1
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.vlqnad"}]: dispatch
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e8 all = 0
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 01:21:12 np0005539508 ceph-mgr[74948]: [progress INFO root] update: starting ev 8a7fbd39-793f-459a-93ff-e5f5e3bb9609 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 29 01:21:12 np0005539508 ceph-mgr[74948]: [progress INFO root] complete: finished ev 4e68207f-6124-4e17-a6a7-080c35b0b4fc (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 29 01:21:12 np0005539508 ceph-mgr[74948]: [progress INFO root] Completed event 4e68207f-6124-4e17-a6a7-080c35b0b4fc (PG autoscaler increasing pool 8 PGs from 1 to 32) in 6 seconds
Nov 29 01:21:12 np0005539508 ceph-mgr[74948]: [progress INFO root] complete: finished ev 8a5a0651-b0e6-4304-8cea-03dbf2437fb2 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 29 01:21:12 np0005539508 ceph-mgr[74948]: [progress INFO root] Completed event 8a5a0651-b0e6-4304-8cea-03dbf2437fb2 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 5 seconds
Nov 29 01:21:12 np0005539508 ceph-mgr[74948]: [progress INFO root] complete: finished ev e2c2442c-2f75-44ac-aff9-5dadde01ae6c (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 29 01:21:12 np0005539508 ceph-mgr[74948]: [progress INFO root] Completed event e2c2442c-2f75-44ac-aff9-5dadde01ae6c (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Nov 29 01:21:12 np0005539508 ceph-mgr[74948]: [progress INFO root] complete: finished ev 8a7fbd39-793f-459a-93ff-e5f5e3bb9609 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 29 01:21:12 np0005539508 ceph-mgr[74948]: [progress INFO root] Completed event 8a7fbd39-793f-459a-93ff-e5f5e3bb9609 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Nov 29 01:21:12 np0005539508 ceph-mgr[74948]: [progress INFO root] Writing back 17 completed events
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 58 pg[9.0( v 56'1130 (0'0,56'1130] local-lis/les=47/48 n=177 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=58 pruub=10.116784096s) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 56'1129 mlcod 56'1129 active pruub 167.403060913s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 58 pg[9.0( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=58 pruub=10.116784096s) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 56'1129 mlcod 0'0 unknown pruub 167.403060913s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:12 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:12 np0005539508 ceph-mgr[74948]: [progress INFO root] complete: finished ev f69c7611-808e-4a28-94ca-4532cf709bfe (Updating mds.cephfs deployment (+3 -> 3))
Nov 29 01:21:12 np0005539508 ceph-mgr[74948]: [progress INFO root] Completed event f69c7611-808e-4a28-94ca-4532cf709bfe (Updating mds.cephfs deployment (+3 -> 3)) in 16 seconds
Nov 29 01:21:13 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Nov 29 01:21:13 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 29 01:21:13 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v183: 274 pgs: 1 peering, 93 unknown, 180 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 198 op/s
Nov 29 01:21:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 01:21:14 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:21:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 29 01:21:14 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:21:14 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 29 01:21:14 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:21:14 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:21:14 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:14 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:14 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:14 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 29 01:21:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.19( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.3( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.e( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.8( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.b( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.17( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.12( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.10( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.2( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1e( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.18( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1b( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.9( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.a( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.14( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.d( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.c( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.f( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.7( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.6( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.15( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.5( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.4( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1a( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1c( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1f( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1d( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.13( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.11( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.16( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.19( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.0( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 56'1129 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.3( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-mgr[74948]: [progress INFO root] update: starting ev 69c26498-5953-4c32-b667-91684388cce7 (Updating ingress.rgw.default deployment (+4 -> 4))
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.17( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.b( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.12( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.10( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.e( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.18( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.9( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.8( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.2( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.14( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.d( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1b( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.a( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.c( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.f( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.7( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.5( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.6( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.15( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1a( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1c( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.4( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1d( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.11( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.13( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0) v1
Nov 29 01:21:14 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.16( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:14 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:14 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.zzbnoj on compute-0
Nov 29 01:21:14 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.zzbnoj on compute-0
Nov 29 01:21:15 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 29 01:21:15 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 29 01:21:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 29 01:21:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v185: 274 pgs: 31 unknown, 243 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 196 op/s
Nov 29 01:21:16 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 01:21:16 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:21:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:21:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:16 np0005539508 ceph-mon[74654]: Deploying daemon haproxy.rgw.default.compute-0.zzbnoj on compute-0
Nov 29 01:21:16 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:21:16 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 29 01:21:16 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 29 01:21:16 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 60 pg[11.0( v 54'2 (0'0,54'2] local-lis/les=51/52 n=2 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=12.091829300s) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 54'1 mlcod 54'1 active pruub 173.492294312s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:16 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 60 pg[11.0( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=12.091829300s) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 54'1 mlcod 0'0 unknown pruub 173.492294312s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:17 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 01:21:17 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:21:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e9 new map
Nov 29 01:21:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e9 print_map#012e9#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0119#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T06:19:35.588785+0000#012modified#0112025-11-29T06:21:17.214295+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24145}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.gxdwyy{0:24145} state up:active seq 6 join_fscid=1 addr [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.jzycnf{-1:14409} state up:standby seq 4 join_fscid=1 addr [v2:192.168.122.100:6806/3521074432,v1:192.168.122.100:6807/3521074432] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.vlqnad{-1:24131} state up:standby seq 1 addr [v2:192.168.122.101:6804/3552238207,v1:192.168.122.101:6805/3552238207] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 01:21:17 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Updating MDS map to version 9 from mon.0
Nov 29 01:21:17 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3521074432,v1:192.168.122.100:6807/3521074432] up:standby
Nov 29 01:21:17 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] up:active
Nov 29 01:21:17 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active} 2 up:standby
Nov 29 01:21:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:21:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 29 01:21:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v187: 305 pgs: 31 unknown, 274 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:21:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:21:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 29 01:21:18 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.14( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.13( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.11( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1f( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1e( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1d( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.6( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.7( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.3( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.4( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.18( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.17( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.5( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.d( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.e( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.f( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.8( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.16( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.19( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1a( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1c( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.12( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.10( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.b( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.9( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.a( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.c( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1( v 54'2 (0'0,54'2] local-lis/les=51/52 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.2( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.15( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1b( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.14( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.11( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.13( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.6( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1f( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.7( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1d( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.4( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.18( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.3( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1e( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.17( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.5( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.d( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.f( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.e( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.8( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.19( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.16( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1a( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1c( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.0( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 54'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.10( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.b( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.9( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1( v 54'2 (0'0,54'2] local-lis/les=60/61 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.12( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.a( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.2( v 54'2 (0'0,54'2] local-lis/les=60/61 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.15( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.c( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1b( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:18 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 01:21:18 np0005539508 ceph-mgr[74948]: [progress WARNING root] Starting Global Recovery Event,31 pgs not in active + clean state
Nov 29 01:21:19 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 29 01:21:19 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 29 01:21:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 31 unknown, 274 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:21:20 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 29 01:21:20 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 29 01:21:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e10 new map
Nov 29 01:21:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e10 print_map#012e10#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0119#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-29T06:19:35.588785+0000#012modified#0112025-11-29T06:21:17.214295+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24145}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.gxdwyy{0:24145} state up:active seq 6 join_fscid=1 addr [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.jzycnf{-1:14409} state up:standby seq 4 join_fscid=1 addr [v2:192.168.122.100:6806/3521074432,v1:192.168.122.100:6807/3521074432] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.vlqnad{-1:24131} state up:standby seq 3 join_fscid=1 addr [v2:192.168.122.101:6804/3552238207,v1:192.168.122.101:6805/3552238207] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 01:21:21 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/3552238207,v1:192.168.122.101:6805/3552238207] up:standby
Nov 29 01:21:21 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active} 2 up:standby
Nov 29 01:21:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:21:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 01:21:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:21:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 01:21:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:21:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 29 01:21:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 01:21:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 01:21:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:21:22 np0005539508 podman[94970]: 2025-11-29 06:21:22.412008935 +0000 UTC m=+6.933384470 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 29 01:21:22 np0005539508 podman[94970]: 2025-11-29 06:21:22.508248496 +0000 UTC m=+7.029624051 container create 66a47136de98396c2fb5fc1883965b7f66af552a37fbd0aad9544714ada98925 (image=quay.io/ceph/haproxy:2.3, name=practical_tharp)
Nov 29 01:21:22 np0005539508 systemd[1]: Started libpod-conmon-66a47136de98396c2fb5fc1883965b7f66af552a37fbd0aad9544714ada98925.scope.
Nov 29 01:21:22 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:21:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:21:22 np0005539508 podman[94970]: 2025-11-29 06:21:22.808276284 +0000 UTC m=+7.329651809 container init 66a47136de98396c2fb5fc1883965b7f66af552a37fbd0aad9544714ada98925 (image=quay.io/ceph/haproxy:2.3, name=practical_tharp)
Nov 29 01:21:22 np0005539508 podman[94970]: 2025-11-29 06:21:22.814909615 +0000 UTC m=+7.336285120 container start 66a47136de98396c2fb5fc1883965b7f66af552a37fbd0aad9544714ada98925 (image=quay.io/ceph/haproxy:2.3, name=practical_tharp)
Nov 29 01:21:22 np0005539508 practical_tharp[95087]: 0 0
Nov 29 01:21:22 np0005539508 systemd[1]: libpod-66a47136de98396c2fb5fc1883965b7f66af552a37fbd0aad9544714ada98925.scope: Deactivated successfully.
Nov 29 01:21:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 29 01:21:23 np0005539508 podman[94970]: 2025-11-29 06:21:23.112766741 +0000 UTC m=+7.634142256 container attach 66a47136de98396c2fb5fc1883965b7f66af552a37fbd0aad9544714ada98925 (image=quay.io/ceph/haproxy:2.3, name=practical_tharp)
Nov 29 01:21:23 np0005539508 podman[94970]: 2025-11-29 06:21:23.113336847 +0000 UTC m=+7.634712362 container died 66a47136de98396c2fb5fc1883965b7f66af552a37fbd0aad9544714ada98925 (image=quay.io/ceph/haproxy:2.3, name=practical_tharp)
Nov 29 01:21:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:21:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:21:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 01:21:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:21:23 np0005539508 systemd[1]: var-lib-containers-storage-overlay-b9e4a4dd1e8a1078d14faa469a4e1812dd7eff194bd2830655f303ec22147d3e-merged.mount: Deactivated successfully.
Nov 29 01:21:23 np0005539508 podman[94970]: 2025-11-29 06:21:23.354194602 +0000 UTC m=+7.875570107 container remove 66a47136de98396c2fb5fc1883965b7f66af552a37fbd0aad9544714ada98925 (image=quay.io/ceph/haproxy:2.3, name=practical_tharp)
Nov 29 01:21:23 np0005539508 systemd[1]: libpod-conmon-66a47136de98396c2fb5fc1883965b7f66af552a37fbd0aad9544714ada98925.scope: Deactivated successfully.
Nov 29 01:21:23 np0005539508 systemd[1]: Reloading.
Nov 29 01:21:23 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:21:23 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:21:23 np0005539508 systemd[1]: Reloading.
Nov 29 01:21:23 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 29 01:21:23 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 29 01:21:23 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:21:23 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:21:23 np0005539508 ceph-mgr[74948]: [progress INFO root] Completed event e513d348-4646-4037-8f31-89368481c0d1 (Global Recovery Event) in 5 seconds
Nov 29 01:21:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v191: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:21:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 01:21:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:21:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 01:21:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:21:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 29 01:21:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 01:21:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 01:21:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:21:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:21:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:21:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 01:21:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:21:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 29 01:21:24 np0005539508 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.zzbnoj for 336ec58c-893b-528f-a0c1-6ed1196bc047...
Nov 29 01:21:24 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[10.14( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[10.8( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[10.13( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[10.1b( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[10.18( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[10.5( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[10.2( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[10.19( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[10.15( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.14( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.422176361s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.078231812s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.17( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.370802879s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026885986s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.14( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.422135353s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.078231812s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.17( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.370767593s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026885986s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.13( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.422014236s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.078262329s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.10( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.370596886s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026870728s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.13( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.421978951s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.078262329s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.10( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.370497704s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026870728s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.12( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.370359421s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026809692s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.12( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.370335579s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026809692s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.1c( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.370198250s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026779175s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.1c( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.370179176s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026779175s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.1e( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.439199448s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.095840454s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.1e( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.439179420s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.095840454s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.1b( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.369860649s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026718140s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.1d( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.438863754s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.095718384s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.1b( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.369842529s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026718140s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.1d( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.438841820s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.095718384s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.5( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.369688988s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026687622s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.5( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.369671822s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026687622s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.7( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.438580513s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.095611572s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.7( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.438559532s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.095611572s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.4( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.369668961s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026794434s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.4( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.369653702s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026794434s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.3( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.438618660s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.095840454s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.3( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.438606262s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.095840454s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.4( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.438386917s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.095748901s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.4( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.438371658s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.095748901s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.17( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.438152313s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.095855713s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.17( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.438129425s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.095855713s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.e( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.438166618s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.095993042s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.e( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.438115120s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.095993042s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.d( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.368515015s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026443481s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.d( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.368495941s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026443481s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.f( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.437891960s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.095962524s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.f( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.437859535s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.095962524s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.c( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.368103027s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026443481s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.8( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.437676430s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.096008301s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.c( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.368072510s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026443481s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.8( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.437631607s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.096008301s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.b( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.367816925s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026412964s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.b( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.367795944s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026412964s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.16( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.437351227s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.096038818s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.16( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.437318802s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.096038818s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.15( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.367474556s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026382446s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.15( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.367448807s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026382446s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.14( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.367616653s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026565552s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.5( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.436944008s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.095932007s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.6( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.367565155s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026565552s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.8( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.367348671s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026367188s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.5( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.436909676s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.095932007s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.6( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.367545128s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026565552s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.8( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.367314339s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026367188s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.14( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.367407799s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026565552s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.19( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.436842918s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.096023560s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.19( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.436819077s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.096023560s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.1a( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.436736107s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.096054077s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.1c( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.436726570s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.096069336s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.19( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.366975784s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026336670s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.1c( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.436676979s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.096069336s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.19( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.366915703s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026336670s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.1a( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.436693192s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.096054077s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.3( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.366551399s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026214600s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.1f( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.366712570s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026412964s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.3( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.366530418s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026214600s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.1f( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.366686821s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026412964s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.12( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.436373711s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.096145630s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.12( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.436349869s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.096145630s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.11( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.366664886s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026489258s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.11( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.366647720s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026489258s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.a( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.366159439s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026153564s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.a( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.366135597s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026153564s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.9( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.366021156s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026092529s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.9( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.365999222s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026092529s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.a( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.436002731s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.096176147s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.a( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.435943604s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.096176147s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.1( v 54'2 (0'0,54'2] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.435586929s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.096145630s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.f( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.365346909s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.025955200s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.2( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.365461349s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026153564s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.f( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.365249634s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.025955200s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.2( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.365444183s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026153564s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.1( v 54'2 (0'0,54'2] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.435533524s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.096145630s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.16( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.365745544s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026702881s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.16( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.365699768s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026702881s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.1b( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.434968948s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.096237183s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.1b( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.434947014s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.096237183s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.18( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.364606857s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.025955200s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.18( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.364582062s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.025955200s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:21:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:21:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:21:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:21:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:21:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:21:24 np0005539508 podman[95233]: 2025-11-29 06:21:24.433067513 +0000 UTC m=+0.100831413 container create f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 01:21:24 np0005539508 podman[95233]: 2025-11-29 06:21:24.36937771 +0000 UTC m=+0.037141600 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 29 01:21:24 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdcc739ed24461c8c577ea73c0480ca465a39cf95d639f924efb4e28e32a1b1d/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Nov 29 01:21:24 np0005539508 podman[95233]: 2025-11-29 06:21:24.608421141 +0000 UTC m=+0.276185091 container init f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 01:21:24 np0005539508 podman[95233]: 2025-11-29 06:21:24.615399002 +0000 UTC m=+0.283162862 container start f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 01:21:24 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj[95248]: [NOTICE] 332/062124 (2) : New worker #1 (4) forked
Nov 29 01:21:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.002000058s ======
Nov 29 01:21:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:24.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Nov 29 01:21:24 np0005539508 bash[95233]: f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f
Nov 29 01:21:24 np0005539508 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.zzbnoj for 336ec58c-893b-528f-a0c1-6ed1196bc047.
Nov 29 01:21:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:21:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 29 01:21:25 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:21:25 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:21:25 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 01:21:25 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:21:25 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:21:25 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:21:25 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 01:21:25 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:21:25 np0005539508 systemd-logind[797]: New session 34 of user zuul.
Nov 29 01:21:25 np0005539508 systemd[1]: Started Session 34 of User zuul.
Nov 29 01:21:25 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:21:25 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:21:25 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 01:21:25 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:21:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 29 01:21:25 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:25 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 29 01:21:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:21:25 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 63 pg[10.15( v 59'99 lc 54'78 (0'0,59'99] local-lis/les=62/63 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=59'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:25 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 63 pg[10.5( v 54'96 (0'0,54'96] local-lis/les=62/63 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=54'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:25 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 63 pg[10.2( v 54'96 (0'0,54'96] local-lis/les=62/63 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=54'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:25 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 63 pg[10.19( v 54'96 (0'0,54'96] local-lis/les=62/63 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=54'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:25 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 63 pg[10.18( v 54'96 (0'0,54'96] local-lis/les=62/63 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=54'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:25 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 63 pg[10.8( v 54'96 (0'0,54'96] local-lis/les=62/63 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=54'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:25 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 63 pg[10.1b( v 54'96 (0'0,54'96] local-lis/les=62/63 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=54'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:25 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 63 pg[10.13( v 54'96 (0'0,54'96] local-lis/les=62/63 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=54'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:25 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 63 pg[10.14( v 59'99 lc 54'86 (0'0,59'99] local-lis/les=62/63 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=59'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:25 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Nov 29 01:21:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v194: 305 pgs: 9 peering, 296 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:21:26 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:26 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.lpqgfx on compute-2
Nov 29 01:21:26 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.lpqgfx on compute-2
Nov 29 01:21:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:21:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:26.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:21:27 np0005539508 python3.9[95417]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:21:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 9 peering, 296 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 109 B/s, 0 objects/s recovering
Nov 29 01:21:28 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 29 01:21:28 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 29 01:21:28 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 29 01:21:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:21:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:28.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:21:28 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:21:28 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:21:28 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 01:21:28 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:21:28 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:28 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:28 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:28 np0005539508 python3.9[95640]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:21:29 np0005539508 ceph-mgr[74948]: [progress INFO root] Writing back 19 completed events
Nov 29 01:21:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 01:21:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 29 01:21:29 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 29 01:21:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:21:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:21:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:21:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:21:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:21:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:21:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:21:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:21:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:21:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:21:29 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:29 np0005539508 ceph-mgr[74948]: [progress WARNING root] Starting Global Recovery Event,40 pgs not in active + clean state
Nov 29 01:21:29 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Nov 29 01:21:29 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Nov 29 01:21:29 np0005539508 ceph-mon[74654]: Deploying daemon haproxy.rgw.default.compute-2.lpqgfx on compute-2
Nov 29 01:21:29 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v197: 305 pgs: 40 peering, 265 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 145 B/s, 0 objects/s recovering
Nov 29 01:21:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:21:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:30.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:21:31 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 29 01:21:31 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 29 01:21:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v198: 305 pgs: 31 peering, 274 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 111 B/s, 0 objects/s recovering
Nov 29 01:21:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:21:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:32.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:21:32 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.b scrub starts
Nov 29 01:21:32 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.b scrub ok
Nov 29 01:21:33 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:21:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 118 B/s, 0 objects/s recovering
Nov 29 01:21:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 29 01:21:34 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 01:21:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 29 01:21:34 np0005539508 ceph-mgr[74948]: [progress INFO root] Completed event 26d17dde-91e9-46c1-94a3-4bff28b62117 (Global Recovery Event) in 5 seconds
Nov 29 01:21:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:21:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:34.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:21:35 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 01:21:35 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 29 01:21:35 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 29 01:21:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:21:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:35.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:21:35 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:21:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v201: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 01:21:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 29 01:21:36 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 01:21:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 29 01:21:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:21:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:36.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:21:37 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:21:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:21:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:37.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:21:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v202: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 11 B/s, 0 objects/s recovering
Nov 29 01:21:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 29 01:21:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 01:21:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:21:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:38.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:21:39 np0005539508 ceph-mgr[74948]: [progress INFO root] Writing back 20 completed events
Nov 29 01:21:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:21:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:39.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:21:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 01:21:39 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 29 01:21:39 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 29 01:21:39 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 01:21:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 29 01:21:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v203: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 01:21:40 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 01:21:40 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 01:21:40 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 29 01:21:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:21:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:40.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:21:40 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 29 01:21:40 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 01:21:40 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 29 01:21:41 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.616820335s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 199.191085815s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:41 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.616754532s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.191085815s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:41 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.7( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.616343498s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 199.190933228s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:41 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.13( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.616616249s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 199.191238403s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:41 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.7( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.616241455s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.190933228s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:41 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.13( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.616504669s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.191238403s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:41 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.f( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.616054535s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 199.190872192s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:41 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.f( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.616014481s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.190872192s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:41 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.1b( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.615541458s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 199.190856934s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:41 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.1b( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.615483284s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.190856934s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:41 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.b( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.614504814s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 199.190017700s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:41 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.b( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.614473343s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.190017700s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:41 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.3( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.613969803s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 199.189636230s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:41 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.3( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.613945961s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.189636230s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:41 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.17( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.614059448s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 199.189956665s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:41 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.17( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.614003181s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.189956665s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:21:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:41.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:21:41 np0005539508 systemd[1]: session-34.scope: Deactivated successfully.
Nov 29 01:21:41 np0005539508 systemd[1]: session-34.scope: Consumed 9.395s CPU time.
Nov 29 01:21:41 np0005539508 systemd-logind[797]: Session 34 logged out. Waiting for processes to exit.
Nov 29 01:21:41 np0005539508 systemd-logind[797]: Removed session 34.
Nov 29 01:21:41 np0005539508 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 01:21:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v205: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:21:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 29 01:21:42 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 01:21:42 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Nov 29 01:21:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:21:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:42.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:21:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:21:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:43.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:21:43 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 29 01:21:43 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 29 01:21:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v206: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:21:44 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 01:21:44 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 01:21:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 29 01:21:44 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 01:21:44 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:44 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 01:21:44 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 01:21:44 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 01:21:44 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:44 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 29 01:21:44 np0005539508 ceph-mgr[74948]: [progress WARNING root] Starting Global Recovery Event,8 pgs not in active + clean state
Nov 29 01:21:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:21:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:44.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:21:45 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 29 01:21:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:21:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:45.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:21:45 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 29 01:21:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v208: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:21:46 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 29 01:21:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:21:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:46.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:21:46 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 29 01:21:46 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0) v1
Nov 29 01:21:47 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 29 01:21:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:21:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:47.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:21:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:21:48 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 01:21:48 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 29 01:21:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 01:21:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 01:21:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 01:21:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:21:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:48.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:21:49 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.1b( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:49 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.3( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:49 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.17( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:49 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.1b( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:49 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.17( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:49 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.3( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:49 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.f( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:49 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.7( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:49 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.f( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:49 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.7( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:49 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:49 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:49 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.b( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:49 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.13( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:49 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.b( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:49 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.13( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:21:49 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 29 01:21:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:21:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:49.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:21:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 29 01:21:49 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:49 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 01:21:49 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 01:21:49 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 01:21:49 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 01:21:49 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.klqjoa on compute-2
Nov 29 01:21:49 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.klqjoa on compute-2
Nov 29 01:21:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:21:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:21:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:50.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:21:50 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 29 01:21:50 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 29 01:21:51 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 29 01:21:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:21:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:51.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:21:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v212: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:21:52 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:52 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 01:21:52 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 29 01:21:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:21:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:52.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:21:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:21:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:21:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:53.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:21:53 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 69 pg[9.17( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v214: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:21:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:21:54
Nov 29 01:21:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:21:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Some PGs (0.026230) are unknown; try again later
Nov 29 01:21:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:21:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:21:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:21:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:21:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:21:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:21:54 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 69 pg[9.f( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:54 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 69 pg[9.1b( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:54 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 69 pg[9.b( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:54 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 69 pg[9.7( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:54 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 69 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:54 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 69 pg[9.3( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:54 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 69 pg[9.13( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:21:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 29 01:21:54 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:21:54 np0005539508 ceph-mon[74654]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 01:21:54 np0005539508 ceph-mon[74654]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 01:21:54 np0005539508 ceph-mon[74654]: Deploying daemon keepalived.rgw.default.compute-2.klqjoa on compute-2
Nov 29 01:21:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:21:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:54.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:21:55 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 29 01:21:55 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 29 01:21:55 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 70 pg[9.17( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=5 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=70 pruub=14.174674988s) [2] async=[2] r=-1 lpr=70 pi=[58,70)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 214.084030151s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:55 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 70 pg[9.17( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=5 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=70 pruub=14.174575806s) [2] r=-1 lpr=70 pi=[58,70)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.084030151s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:21:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:55.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:21:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 6 active+remapped, 1 active+recovering+remapped, 1 unknown, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 984 B/s wr, 87 op/s; 6/210 objects misplaced (2.857%); 120 B/s, 4 objects/s recovering
Nov 29 01:21:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 29 01:21:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 29 01:21:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:21:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:56.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:21:56 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 29 01:21:56 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.13( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=5 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.574925423s) [2] async=[2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 214.867950439s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:56 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.13( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=5 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.574789047s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.867950439s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:56 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=5 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.574648857s) [2] async=[2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 214.867843628s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:56 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=5 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.574518204s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.867843628s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:56 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.7( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=6 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.573354721s) [2] async=[2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 214.867782593s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:56 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.7( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=6 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.573298454s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.867782593s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:56 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.f( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=6 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.572719574s) [2] async=[2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 214.867492676s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:56 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.f( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=6 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.572625160s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.867492676s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:56 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.1b( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=5 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.571574211s) [2] async=[2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 214.867523193s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:56 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.1b( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=5 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.571413994s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.867523193s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:56 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.b( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=6 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.571501732s) [2] async=[2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 214.867752075s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:56 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.3( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=6 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.571502686s) [2] async=[2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 214.867919922s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:21:56 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.b( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=6 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.571456909s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.867752075s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:56 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.3( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=6 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.571374893s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.867919922s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:21:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:21:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:57.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:21:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 29 01:21:57 np0005539508 systemd-logind[797]: New session 35 of user zuul.
Nov 29 01:21:57 np0005539508 systemd[1]: Started Session 35 of User zuul.
Nov 29 01:21:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 6 active+remapped, 1 active+recovering+remapped, 1 unknown, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 59 KiB/s rd, 1.2 KiB/s wr, 106 op/s; 6/210 objects misplaced (2.857%); 146 B/s, 4 objects/s recovering
Nov 29 01:21:58 np0005539508 python3.9[95855]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 29 01:21:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:21:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:58.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:21:58 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 29 01:21:58 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 29 01:21:58 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 29 01:21:59 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 29 01:21:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:21:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:21:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:59.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:21:59 np0005539508 python3.9[96029]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:22:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v220: 305 pgs: 7 peering, 298 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 1.3 KiB/s wr, 121 op/s; 0 B/s, 0 objects/s recovering
Nov 29 01:22:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:00.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:00 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 29 01:22:00 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 29 01:22:01 np0005539508 python3.9[96185]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:22:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:01.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:01 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 29 01:22:01 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 29 01:22:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 7 peering, 298 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 60 KiB/s rd, 1.2 KiB/s wr, 108 op/s; 0 B/s, 0 objects/s recovering
Nov 29 01:22:02 np0005539508 python3.9[96338]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:22:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:02.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:02 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Nov 29 01:22:02 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Nov 29 01:22:03 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:22:03 np0005539508 python3.9[96492]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:22:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:22:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:03.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:22:03 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 29 01:22:03 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 29 01:22:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 6.2 KiB/s rd, 127 B/s wr, 11 op/s; 41 B/s, 1 objects/s recovering
Nov 29 01:22:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 29 01:22:04 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 01:22:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 29 01:22:04 np0005539508 python3.9[96644]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:22:04 np0005539508 ceph-mgr[74948]: [progress INFO root] Completed event 34d76df6-32e5-4f0c-9055-8e03a8da6814 (Global Recovery Event) in 20 seconds
Nov 29 01:22:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:04.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:22:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:05.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:22:05 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 29 01:22:05 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 29 01:22:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 5.4 KiB/s rd, 110 B/s wr, 9 op/s; 35 B/s, 0 objects/s recovering
Nov 29 01:22:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 29 01:22:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 01:22:06 np0005539508 python3.9[96794]: ansible-ansible.builtin.service_facts Invoked
Nov 29 01:22:06 np0005539508 network[96811]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 01:22:06 np0005539508 network[96812]: 'network-scripts' will be removed from distribution in near future.
Nov 29 01:22:06 np0005539508 network[96813]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 01:22:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 01:22:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 29 01:22:06 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 29 01:22:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 73 pg[9.1d( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=73 pruub=12.420503616s) [2] r=-1 lpr=73 pi=[58,73)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 223.191528320s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 73 pg[9.5( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=73 pruub=12.419773102s) [2] r=-1 lpr=73 pi=[58,73)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 223.191268921s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 73 pg[9.15( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=73 pruub=12.419677734s) [2] r=-1 lpr=73 pi=[58,73)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 223.191482544s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 73 pg[9.15( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=73 pruub=12.419614792s) [2] r=-1 lpr=73 pi=[58,73)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.191482544s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 73 pg[9.d( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=73 pruub=12.419064522s) [2] r=-1 lpr=73 pi=[58,73)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 223.191131592s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 73 pg[9.d( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=73 pruub=12.419006348s) [2] r=-1 lpr=73 pi=[58,73)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.191131592s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 73 pg[9.1d( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=73 pruub=12.419088364s) [2] r=-1 lpr=73 pi=[58,73)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.191528320s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:06 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 73 pg[9.5( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=73 pruub=12.418711662s) [2] r=-1 lpr=73 pi=[58,73)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.191268921s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:06.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:22:06 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 01:22:06 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 01:22:06 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 01:22:07 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:22:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 29 01:22:07 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Nov 29 01:22:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:07.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:07 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 01:22:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 29 01:22:07 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 29 01:22:07 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 74 pg[9.1d( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=74) [2]/[1] r=0 lpr=74 pi=[58,74)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:07 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 74 pg[9.1d( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=74) [2]/[1] r=0 lpr=74 pi=[58,74)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:22:07 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 74 pg[9.5( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=74) [2]/[1] r=0 lpr=74 pi=[58,74)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:07 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 74 pg[9.5( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=74) [2]/[1] r=0 lpr=74 pi=[58,74)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:22:07 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 74 pg[9.15( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=74) [2]/[1] r=0 lpr=74 pi=[58,74)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:07 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 74 pg[9.15( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=74) [2]/[1] r=0 lpr=74 pi=[58,74)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:22:07 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 74 pg[9.d( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=74) [2]/[1] r=0 lpr=74 pi=[58,74)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:07 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 74 pg[9.d( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=74) [2]/[1] r=0 lpr=74 pi=[58,74)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:22:07 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 29 01:22:07 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:07 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 01:22:07 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 01:22:07 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 01:22:07 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 01:22:07 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.uyqrbs on compute-0
Nov 29 01:22:07 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.uyqrbs on compute-0
Nov 29 01:22:08 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 29 01:22:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 151 B/s, 4 objects/s recovering
Nov 29 01:22:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 29 01:22:08 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 01:22:08 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:08 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:08 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 01:22:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:22:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 29 01:22:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:08.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:09 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 01:22:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 29 01:22:09 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 29 01:22:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 75 pg[9.16( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=75 pruub=9.681794167s) [0] r=-1 lpr=75 pi=[58,75)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 223.191467285s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 75 pg[9.16( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=75 pruub=9.681725502s) [0] r=-1 lpr=75 pi=[58,75)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.191467285s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 75 pg[9.6( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=75 pruub=9.680493355s) [0] r=-1 lpr=75 pi=[58,75)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 223.191268921s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 75 pg[9.6( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=75 pruub=9.680408478s) [0] r=-1 lpr=75 pi=[58,75)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.191268921s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 75 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=75 pruub=9.679621696s) [0] r=-1 lpr=75 pi=[58,75)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 223.190826416s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 75 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=75 pruub=9.679498672s) [0] r=-1 lpr=75 pi=[58,75)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.190826416s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 75 pg[9.e( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=75 pruub=9.679297447s) [0] r=-1 lpr=75 pi=[58,75)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 223.190750122s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 75 pg[9.e( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=75 pruub=9.679251671s) [0] r=-1 lpr=75 pi=[58,75)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.190750122s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 75 pg[9.15( v 56'1130 (0'0,56'1130] local-lis/les=74/75 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[58,74)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:22:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 75 pg[9.d( v 56'1130 (0'0,56'1130] local-lis/les=74/75 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[58,74)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:22:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 75 pg[9.5( v 56'1130 (0'0,56'1130] local-lis/les=74/75 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[58,74)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:22:09 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 75 pg[9.1d( v 56'1130 (0'0,56'1130] local-lis/les=74/75 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[58,74)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:22:09 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:09 np0005539508 ceph-mon[74654]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 01:22:09 np0005539508 ceph-mon[74654]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 01:22:09 np0005539508 ceph-mon[74654]: Deploying daemon keepalived.rgw.default.compute-0.uyqrbs on compute-0
Nov 29 01:22:09 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 01:22:09 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 01:22:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:09.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:09 np0005539508 ceph-mgr[74948]: [progress INFO root] Writing back 21 completed events
Nov 29 01:22:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 01:22:09 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 29 01:22:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:22:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 29 01:22:10 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 01:22:10 np0005539508 python3.9[97260]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:22:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:10.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 29 01:22:10 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 29 01:22:11 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 76 pg[9.6( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=76) [0]/[1] r=0 lpr=76 pi=[58,76)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:11 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 76 pg[9.6( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=76) [0]/[1] r=0 lpr=76 pi=[58,76)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:22:11 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 76 pg[9.e( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=76) [0]/[1] r=0 lpr=76 pi=[58,76)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:11 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 76 pg[9.e( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=76) [0]/[1] r=0 lpr=76 pi=[58,76)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:22:11 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 76 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=76) [0]/[1] r=0 lpr=76 pi=[58,76)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:11 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 76 pg[9.16( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=76) [0]/[1] r=0 lpr=76 pi=[58,76)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:11 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 76 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=76) [0]/[1] r=0 lpr=76 pi=[58,76)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:22:11 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 76 pg[9.16( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=76) [0]/[1] r=0 lpr=76 pi=[58,76)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:22:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:11.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:11 np0005539508 python3.9[97425]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:22:11 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 29 01:22:11 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:11 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 01:22:11 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 29 01:22:11 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 29 01:22:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 2 active+recovery_wait+remapped, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 15/215 objects misplaced (6.977%); 38 B/s, 1 objects/s recovering
Nov 29 01:22:12 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 01:22:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 29 01:22:12 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 29 01:22:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 77 pg[9.5( v 56'1130 (0'0,56'1130] local-lis/les=74/75 n=6 ec=58/47 lis/c=74/58 les/c/f=75/59/0 sis=77 pruub=12.818251610s) [2] async=[2] r=-1 lpr=77 pi=[58,77)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 229.706832886s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 77 pg[9.1d( v 56'1130 (0'0,56'1130] local-lis/les=74/75 n=5 ec=58/47 lis/c=74/58 les/c/f=75/59/0 sis=77 pruub=12.818084717s) [2] async=[2] r=-1 lpr=77 pi=[58,77)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 229.706848145s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 77 pg[9.15( v 56'1130 (0'0,56'1130] local-lis/les=74/75 n=5 ec=58/47 lis/c=74/58 les/c/f=75/59/0 sis=77 pruub=12.813915253s) [2] async=[2] r=-1 lpr=77 pi=[58,77)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 229.702682495s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 77 pg[9.15( v 56'1130 (0'0,56'1130] local-lis/les=74/75 n=5 ec=58/47 lis/c=74/58 les/c/f=75/59/0 sis=77 pruub=12.813832283s) [2] r=-1 lpr=77 pi=[58,77)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.702682495s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 77 pg[9.d( v 56'1130 (0'0,56'1130] local-lis/les=74/75 n=6 ec=58/47 lis/c=74/58 les/c/f=75/59/0 sis=77 pruub=12.813839912s) [2] async=[2] r=-1 lpr=77 pi=[58,77)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 229.702865601s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 77 pg[9.d( v 56'1130 (0'0,56'1130] local-lis/les=74/75 n=6 ec=58/47 lis/c=74/58 les/c/f=75/59/0 sis=77 pruub=12.813767433s) [2] r=-1 lpr=77 pi=[58,77)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.702865601s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 77 pg[9.5( v 56'1130 (0'0,56'1130] local-lis/les=74/75 n=6 ec=58/47 lis/c=74/58 les/c/f=75/59/0 sis=77 pruub=12.817247391s) [2] r=-1 lpr=77 pi=[58,77)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.706832886s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:12 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 77 pg[9.1d( v 56'1130 (0'0,56'1130] local-lis/les=74/75 n=5 ec=58/47 lis/c=74/58 les/c/f=75/59/0 sis=77 pruub=12.817220688s) [2] r=-1 lpr=77 pi=[58,77)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.706848145s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:22:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:22:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:22:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:22:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:22:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:22:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:22:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:22:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:22:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:22:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:22:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:22:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:22:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:22:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:22:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:22:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:22:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:22:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:22:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:22:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:22:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:22:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:22:12 np0005539508 podman[97045]: 2025-11-29 06:22:12.6428453 +0000 UTC m=+4.102556115 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 29 01:22:12 np0005539508 podman[97045]: 2025-11-29 06:22:12.725327411 +0000 UTC m=+4.185038156 container create ad3d38a391c76f75f60c72e0c20e2421f402cca1289b400416279b4fc18d2251 (image=quay.io/ceph/keepalived:2.2.4, name=sleepy_hypatia, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, name=keepalived, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, release=1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, build-date=2023-02-22T09:23:20)
Nov 29 01:22:12 np0005539508 systemd[76267]: Created slice User Background Tasks Slice.
Nov 29 01:22:12 np0005539508 systemd[76267]: Starting Cleanup of User's Temporary Files and Directories...
Nov 29 01:22:12 np0005539508 systemd[1]: Started libpod-conmon-ad3d38a391c76f75f60c72e0c20e2421f402cca1289b400416279b4fc18d2251.scope.
Nov 29 01:22:12 np0005539508 systemd[76267]: Finished Cleanup of User's Temporary Files and Directories.
Nov 29 01:22:12 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:22:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:22:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:12.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:22:12 np0005539508 podman[97045]: 2025-11-29 06:22:12.815814852 +0000 UTC m=+4.275525607 container init ad3d38a391c76f75f60c72e0c20e2421f402cca1289b400416279b4fc18d2251 (image=quay.io/ceph/keepalived:2.2.4, name=sleepy_hypatia, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, version=2.2.4, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph)
Nov 29 01:22:12 np0005539508 podman[97045]: 2025-11-29 06:22:12.824824111 +0000 UTC m=+4.284534846 container start ad3d38a391c76f75f60c72e0c20e2421f402cca1289b400416279b4fc18d2251 (image=quay.io/ceph/keepalived:2.2.4, name=sleepy_hypatia, version=2.2.4, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, architecture=x86_64, description=keepalived for Ceph, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=keepalived-container, io.buildah.version=1.28.2)
Nov 29 01:22:12 np0005539508 sleepy_hypatia[97587]: 0 0
Nov 29 01:22:12 np0005539508 systemd[1]: libpod-ad3d38a391c76f75f60c72e0c20e2421f402cca1289b400416279b4fc18d2251.scope: Deactivated successfully.
Nov 29 01:22:12 np0005539508 podman[97045]: 2025-11-29 06:22:12.832937414 +0000 UTC m=+4.292648179 container attach ad3d38a391c76f75f60c72e0c20e2421f402cca1289b400416279b4fc18d2251 (image=quay.io/ceph/keepalived:2.2.4, name=sleepy_hypatia, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, distribution-scope=public, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, release=1793, vcs-type=git, io.buildah.version=1.28.2, name=keepalived, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.openshift.expose-services=)
Nov 29 01:22:12 np0005539508 podman[97045]: 2025-11-29 06:22:12.833452579 +0000 UTC m=+4.293163324 container died ad3d38a391c76f75f60c72e0c20e2421f402cca1289b400416279b4fc18d2251 (image=quay.io/ceph/keepalived:2.2.4, name=sleepy_hypatia, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, name=keepalived, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, vendor=Red Hat, Inc., version=2.2.4, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 01:22:12 np0005539508 systemd[1]: var-lib-containers-storage-overlay-14a0be28cbe8a27d1ba6b7d6e055081dbc513e533240cc7f364122d015dcc029-merged.mount: Deactivated successfully.
Nov 29 01:22:13 np0005539508 podman[97045]: 2025-11-29 06:22:13.02450018 +0000 UTC m=+4.484210925 container remove ad3d38a391c76f75f60c72e0c20e2421f402cca1289b400416279b4fc18d2251 (image=quay.io/ceph/keepalived:2.2.4, name=sleepy_hypatia, vendor=Red Hat, Inc., name=keepalived, io.openshift.expose-services=, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, distribution-scope=public, architecture=x86_64, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, com.redhat.component=keepalived-container, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 29 01:22:13 np0005539508 systemd[1]: libpod-conmon-ad3d38a391c76f75f60c72e0c20e2421f402cca1289b400416279b4fc18d2251.scope: Deactivated successfully.
Nov 29 01:22:13 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 77 pg[9.6( v 56'1130 (0'0,56'1130] local-lis/les=76/77 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=76) [0]/[1] async=[0] r=0 lpr=76 pi=[58,76)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:22:13 np0005539508 python3.9[97617]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:22:13 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 77 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=76/77 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=76) [0]/[1] async=[0] r=0 lpr=76 pi=[58,76)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:22:13 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 77 pg[9.e( v 56'1130 (0'0,56'1130] local-lis/les=76/77 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=76) [0]/[1] async=[0] r=0 lpr=76 pi=[58,76)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:22:13 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 77 pg[9.16( v 56'1130 (0'0,56'1130] local-lis/les=76/77 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=76) [0]/[1] async=[0] r=0 lpr=76 pi=[58,76)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:22:13 np0005539508 systemd[1]: Reloading.
Nov 29 01:22:13 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:22:13 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:22:13 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 01:22:13 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:22:13 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 29 01:22:13 np0005539508 systemd[1]: Reloading.
Nov 29 01:22:13 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:22:13 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:22:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:13.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:13 np0005539508 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.uyqrbs for 336ec58c-893b-528f-a0c1-6ed1196bc047...
Nov 29 01:22:13 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 29 01:22:13 np0005539508 podman[97880]: 2025-11-29 06:22:13.86927605 +0000 UTC m=+0.021187410 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 29 01:22:13 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 29 01:22:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 2 active+recovery_wait+remapped, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 15/215 objects misplaced (6.977%); 36 B/s, 1 objects/s recovering
Nov 29 01:22:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:22:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:14.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:22:14 np0005539508 ceph-mgr[74948]: [progress WARNING root] Starting Global Recovery Event,4 pgs not in active + clean state
Nov 29 01:22:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:15.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:15 np0005539508 python3.9[97928]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 01:22:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 1 active+recovering+remapped, 5 active+remapped, 2 active+recovery_wait+remapped, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 15/215 objects misplaced (6.977%); 63 B/s, 4 objects/s recovering
Nov 29 01:22:16 np0005539508 python3.9[98012]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:22:16 np0005539508 podman[97880]: 2025-11-29 06:22:16.725650908 +0000 UTC m=+2.877562238 container create c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, build-date=2023-02-22T09:23:20, release=1793, version=2.2.4, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git)
Nov 29 01:22:16 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59275f770a1a56dcc7697791c45a93f5dc6caab1bfa9bfceb0efcfcbcaa4aac0/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:22:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:16.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:16 np0005539508 podman[97880]: 2025-11-29 06:22:16.805153733 +0000 UTC m=+2.957065153 container init c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, description=keepalived for Ceph, release=1793, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.openshift.expose-services=, version=2.2.4, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Nov 29 01:22:16 np0005539508 podman[97880]: 2025-11-29 06:22:16.811006971 +0000 UTC m=+2.962918331 container start c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, io.openshift.tags=Ceph keepalived, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, vcs-type=git, version=2.2.4, io.buildah.version=1.28.2, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, description=keepalived for Ceph, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 29 01:22:16 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs[98016]: Sat Nov 29 06:22:16 2025: Starting Keepalived v2.2.4 (08/21,2021)
Nov 29 01:22:16 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs[98016]: Sat Nov 29 06:22:16 2025: Running on Linux 5.14.0-642.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025 (built for Linux 5.14.0)
Nov 29 01:22:16 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs[98016]: Sat Nov 29 06:22:16 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Nov 29 01:22:16 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs[98016]: Sat Nov 29 06:22:16 2025: Configuration file /etc/keepalived/keepalived.conf
Nov 29 01:22:16 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs[98016]: Sat Nov 29 06:22:16 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Nov 29 01:22:16 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs[98016]: Sat Nov 29 06:22:16 2025: Starting VRRP child process, pid=4
Nov 29 01:22:16 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs[98016]: Sat Nov 29 06:22:16 2025: Startup complete
Nov 29 01:22:16 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs[98016]: Sat Nov 29 06:22:16 2025: (VI_0) Entering BACKUP STATE (init)
Nov 29 01:22:16 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs[98016]: Sat Nov 29 06:22:16 2025: VRRP_Script(check_backend) succeeded
Nov 29 01:22:16 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 29 01:22:16 np0005539508 bash[97880]: c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade
Nov 29 01:22:16 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 29 01:22:16 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 78 pg[9.16( v 56'1130 (0'0,56'1130] local-lis/les=76/77 n=5 ec=58/47 lis/c=76/58 les/c/f=77/59/0 sis=78 pruub=12.191514969s) [0] async=[0] r=-1 lpr=78 pi=[58,78)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 233.521408081s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:16 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 78 pg[9.16( v 56'1130 (0'0,56'1130] local-lis/les=76/77 n=5 ec=58/47 lis/c=76/58 les/c/f=77/59/0 sis=78 pruub=12.191367149s) [0] r=-1 lpr=78 pi=[58,78)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.521408081s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:16 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 78 pg[9.e( v 56'1130 (0'0,56'1130] local-lis/les=76/77 n=6 ec=58/47 lis/c=76/58 les/c/f=77/59/0 sis=78 pruub=12.190675735s) [0] async=[0] r=-1 lpr=78 pi=[58,78)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 233.521423340s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:16 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 78 pg[9.e( v 56'1130 (0'0,56'1130] local-lis/les=76/77 n=6 ec=58/47 lis/c=76/58 les/c/f=77/59/0 sis=78 pruub=12.190603256s) [0] r=-1 lpr=78 pi=[58,78)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.521423340s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:16 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 78 pg[9.6( v 56'1130 (0'0,56'1130] local-lis/les=76/77 n=6 ec=58/47 lis/c=76/58 les/c/f=77/59/0 sis=78 pruub=12.183088303s) [0] async=[0] r=-1 lpr=78 pi=[58,78)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 233.514236450s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:16 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 78 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=76/77 n=5 ec=58/47 lis/c=76/58 les/c/f=77/59/0 sis=78 pruub=12.190187454s) [0] async=[0] r=-1 lpr=78 pi=[58,78)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 233.521392822s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:16 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 78 pg[9.6( v 56'1130 (0'0,56'1130] local-lis/les=76/77 n=6 ec=58/47 lis/c=76/58 les/c/f=77/59/0 sis=78 pruub=12.183005333s) [0] r=-1 lpr=78 pi=[58,78)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.514236450s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:16 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 78 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=76/77 n=5 ec=58/47 lis/c=76/58 les/c/f=77/59/0 sis=78 pruub=12.190085411s) [0] r=-1 lpr=78 pi=[58,78)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.521392822s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:16 np0005539508 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.uyqrbs for 336ec58c-893b-528f-a0c1-6ed1196bc047.
Nov 29 01:22:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:22:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:17.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:17 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 29 01:22:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:22:17 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 29 01:22:17 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 29 01:22:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 1 active+recovering+remapped, 5 active+remapped, 2 active+recovery_wait+remapped, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 15/215 objects misplaced (6.977%); 30 B/s, 2 objects/s recovering
Nov 29 01:22:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 29 01:22:18 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:18 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 29 01:22:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Nov 29 01:22:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:22:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:18 np0005539508 ceph-mgr[74948]: [progress INFO root] complete: finished ev 69c26498-5953-4c32-b667-91684388cce7 (Updating ingress.rgw.default deployment (+4 -> 4))
Nov 29 01:22:18 np0005539508 ceph-mgr[74948]: [progress INFO root] Completed event 69c26498-5953-4c32-b667-91684388cce7 (Updating ingress.rgw.default deployment (+4 -> 4)) in 64 seconds
Nov 29 01:22:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Nov 29 01:22:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:22:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:18.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:22:18 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 29 01:22:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:22:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:19.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:22:20 np0005539508 ceph-mgr[74948]: [progress INFO root] Writing back 22 completed events
Nov 29 01:22:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 4 peering, 4 active+remapped, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 78 B/s, 4 objects/s recovering
Nov 29 01:22:20 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 01:22:20 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 29 01:22:20 np0005539508 podman[98324]: 2025-11-29 06:22:20.215703367 +0000 UTC m=+0.625013525 container exec c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:22:20 np0005539508 podman[98324]: 2025-11-29 06:22:20.340344839 +0000 UTC m=+0.749654997 container exec_died c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 01:22:20 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs[98016]: Sat Nov 29 06:22:20 2025: (VI_0) Entering MASTER STATE
Nov 29 01:22:20 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 01:22:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:20.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:21 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 29 01:22:21 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 29 01:22:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:22:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:21.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:22:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 4 peering, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 74 B/s, 4 objects/s recovering
Nov 29 01:22:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:22:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:22.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:22 np0005539508 podman[98492]: 2025-11-29 06:22:22.887942932 +0000 UTC m=+1.836599458 container exec f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 01:22:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:23 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 29 01:22:23 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 29 01:22:23 np0005539508 podman[98492]: 2025-11-29 06:22:23.147406359 +0000 UTC m=+2.096062885 container exec_died f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 01:22:23 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:22:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:22:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:23.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:22:23 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:22:23 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:22:23 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 46 B/s, 1 objects/s recovering
Nov 29 01:22:24 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:24 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:24 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:24 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:24 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 29 01:22:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 01:22:24 np0005539508 podman[98578]: 2025-11-29 06:22:24.257474055 +0000 UTC m=+0.144815273 container exec c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, io.openshift.tags=Ceph keepalived, name=keepalived, io.openshift.expose-services=, vcs-type=git, release=1793, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, description=keepalived for Ceph, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 29 01:22:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:22:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:22:24 np0005539508 podman[98578]: 2025-11-29 06:22:24.278600902 +0000 UTC m=+0.165942120 container exec_died c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, release=1793, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 29 01:22:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:22:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:22:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:22:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:22:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:22:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:22:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:22:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:24.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:22:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 29 01:22:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:22:25 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:22:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:22:25 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:22:25 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 01:22:25 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:25 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:22:25 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 01:22:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 29 01:22:25 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 29 01:22:25 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:25 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 8825e13f-1524-4f42-96fe-4d5641d9472e does not exist
Nov 29 01:22:25 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 274b306b-a052-4fad-935b-f622c512e3ee does not exist
Nov 29 01:22:25 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev b62b7db2-e7dd-4846-a176-bd5b9efc327a does not exist
Nov 29 01:22:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:22:25 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:22:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:22:25 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:22:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:22:25 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:22:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:25.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 46 B/s, 1 objects/s recovering
Nov 29 01:22:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 29 01:22:26 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 01:22:26 np0005539508 podman[98887]: 2025-11-29 06:22:26.225723075 +0000 UTC m=+0.101026175 container create aa246c832b00437fa3c5ea02a3dc25968be3511496f8752074384a9f12583ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_einstein, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 01:22:26 np0005539508 podman[98887]: 2025-11-29 06:22:26.152958283 +0000 UTC m=+0.028261393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:22:26 np0005539508 systemd[1]: Started libpod-conmon-aa246c832b00437fa3c5ea02a3dc25968be3511496f8752074384a9f12583ba3.scope.
Nov 29 01:22:26 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:22:26 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 80 pg[9.18( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=80 pruub=8.407164574s) [2] r=-1 lpr=80 pi=[58,80)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 239.191238403s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:26 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 80 pg[9.18( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=80 pruub=8.407092094s) [2] r=-1 lpr=80 pi=[58,80)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 239.191238403s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:26 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 80 pg[9.8( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=80 pruub=8.406921387s) [2] r=-1 lpr=80 pi=[58,80)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 239.191238403s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:26 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 80 pg[9.8( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=80 pruub=8.406853676s) [2] r=-1 lpr=80 pi=[58,80)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 239.191238403s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:26 np0005539508 podman[98887]: 2025-11-29 06:22:26.378047163 +0000 UTC m=+0.253350273 container init aa246c832b00437fa3c5ea02a3dc25968be3511496f8752074384a9f12583ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 01:22:26 np0005539508 podman[98887]: 2025-11-29 06:22:26.385823506 +0000 UTC m=+0.261126586 container start aa246c832b00437fa3c5ea02a3dc25968be3511496f8752074384a9f12583ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:22:26 np0005539508 podman[98887]: 2025-11-29 06:22:26.38941328 +0000 UTC m=+0.264716390 container attach aa246c832b00437fa3c5ea02a3dc25968be3511496f8752074384a9f12583ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_einstein, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:22:26 np0005539508 musing_einstein[98903]: 167 167
Nov 29 01:22:26 np0005539508 systemd[1]: libpod-aa246c832b00437fa3c5ea02a3dc25968be3511496f8752074384a9f12583ba3.scope: Deactivated successfully.
Nov 29 01:22:26 np0005539508 podman[98887]: 2025-11-29 06:22:26.39323861 +0000 UTC m=+0.268541700 container died aa246c832b00437fa3c5ea02a3dc25968be3511496f8752074384a9f12583ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_einstein, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 01:22:26 np0005539508 systemd[1]: var-lib-containers-storage-overlay-0f6fa6523a98bcaa0ee5e39fe3260bdbf66f9ea7f57591e0b9bc87de2cdd922f-merged.mount: Deactivated successfully.
Nov 29 01:22:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 29 01:22:26 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:22:26 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 01:22:26 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:26 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:22:26 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 01:22:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:22:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:26.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:22:26 np0005539508 podman[98887]: 2025-11-29 06:22:26.848780873 +0000 UTC m=+0.724083953 container remove aa246c832b00437fa3c5ea02a3dc25968be3511496f8752074384a9f12583ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 01:22:26 np0005539508 systemd[1]: libpod-conmon-aa246c832b00437fa3c5ea02a3dc25968be3511496f8752074384a9f12583ba3.scope: Deactivated successfully.
Nov 29 01:22:27 np0005539508 podman[98928]: 2025-11-29 06:22:27.042103679 +0000 UTC m=+0.053272482 container create 19f7299a5ce46e2ec58b033a5da810fcd530e85fdd640a51fc5d3431b9c0532b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_curie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 01:22:27 np0005539508 podman[98928]: 2025-11-29 06:22:27.011390816 +0000 UTC m=+0.022559639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:22:27 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 01:22:27 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 29 01:22:27 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 29 01:22:27 np0005539508 systemd[1]: Started libpod-conmon-19f7299a5ce46e2ec58b033a5da810fcd530e85fdd640a51fc5d3431b9c0532b.scope.
Nov 29 01:22:27 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:22:27 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae17ed59ca598e381a342e13c12b2245949339a5199b49c56c5b912d0e92afed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:22:27 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae17ed59ca598e381a342e13c12b2245949339a5199b49c56c5b912d0e92afed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:22:27 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae17ed59ca598e381a342e13c12b2245949339a5199b49c56c5b912d0e92afed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:22:27 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae17ed59ca598e381a342e13c12b2245949339a5199b49c56c5b912d0e92afed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:22:27 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae17ed59ca598e381a342e13c12b2245949339a5199b49c56c5b912d0e92afed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:22:27 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 81 pg[9.9( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=81 pruub=15.454995155s) [2] r=-1 lpr=81 pi=[58,81)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 247.191528320s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:27 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 81 pg[9.19( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=81 pruub=15.419450760s) [2] r=-1 lpr=81 pi=[58,81)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 247.156372070s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:27 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 81 pg[9.19( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=81 pruub=15.419392586s) [2] r=-1 lpr=81 pi=[58,81)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 247.156372070s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:27 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 81 pg[9.9( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=81 pruub=15.454208374s) [2] r=-1 lpr=81 pi=[58,81)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 247.191528320s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:27 np0005539508 podman[98928]: 2025-11-29 06:22:27.363379873 +0000 UTC m=+0.374548766 container init 19f7299a5ce46e2ec58b033a5da810fcd530e85fdd640a51fc5d3431b9c0532b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:22:27 np0005539508 podman[98928]: 2025-11-29 06:22:27.370254901 +0000 UTC m=+0.381423734 container start 19f7299a5ce46e2ec58b033a5da810fcd530e85fdd640a51fc5d3431b9c0532b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:22:27 np0005539508 podman[98928]: 2025-11-29 06:22:27.451427224 +0000 UTC m=+0.462596107 container attach 19f7299a5ce46e2ec58b033a5da810fcd530e85fdd640a51fc5d3431b9c0532b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 01:22:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:22:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:27.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:22:27 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 01:22:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:22:28 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 29 01:22:28 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 01:22:28 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 29 01:22:28 np0005539508 distracted_curie[98945]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:22:28 np0005539508 distracted_curie[98945]: --> relative data size: 1.0
Nov 29 01:22:28 np0005539508 distracted_curie[98945]: --> All data devices are unavailable
Nov 29 01:22:28 np0005539508 systemd[1]: libpod-19f7299a5ce46e2ec58b033a5da810fcd530e85fdd640a51fc5d3431b9c0532b.scope: Deactivated successfully.
Nov 29 01:22:28 np0005539508 podman[98928]: 2025-11-29 06:22:28.281212713 +0000 UTC m=+1.292381546 container died 19f7299a5ce46e2ec58b033a5da810fcd530e85fdd640a51fc5d3431b9c0532b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_curie, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:22:28 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 01:22:28 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 29 01:22:28 np0005539508 systemd[1]: var-lib-containers-storage-overlay-ae17ed59ca598e381a342e13c12b2245949339a5199b49c56c5b912d0e92afed-merged.mount: Deactivated successfully.
Nov 29 01:22:28 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 29 01:22:28 np0005539508 podman[98928]: 2025-11-29 06:22:28.489857781 +0000 UTC m=+1.501026634 container remove 19f7299a5ce46e2ec58b033a5da810fcd530e85fdd640a51fc5d3431b9c0532b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_curie, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:22:28 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 82 pg[9.19( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82) [2]/[1] r=0 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:28 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 82 pg[9.8( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82) [2]/[1] r=0 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:28 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 82 pg[9.19( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82) [2]/[1] r=0 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:22:28 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 82 pg[9.9( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82) [2]/[1] r=0 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:28 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 82 pg[9.8( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82) [2]/[1] r=0 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:22:28 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 82 pg[9.9( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82) [2]/[1] r=0 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:22:28 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 82 pg[9.18( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82) [2]/[1] r=0 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:28 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 82 pg[9.18( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82) [2]/[1] r=0 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:22:28 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 82 pg[9.1a( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82 pruub=14.274819374s) [0] r=-1 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 247.191848755s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:28 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 82 pg[9.a( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82 pruub=14.274504662s) [0] r=-1 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 247.191528320s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:28 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 82 pg[9.1a( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82 pruub=14.274509430s) [0] r=-1 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 247.191848755s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:28 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 82 pg[9.a( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82 pruub=14.274133682s) [0] r=-1 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 247.191528320s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:28 np0005539508 systemd[1]: libpod-conmon-19f7299a5ce46e2ec58b033a5da810fcd530e85fdd640a51fc5d3431b9c0532b.scope: Deactivated successfully.
Nov 29 01:22:28 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:22:28 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 29 01:22:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:28.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:28 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 29 01:22:28 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 29 01:22:28 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 83 pg[9.1a( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=83) [0]/[1] r=0 lpr=83 pi=[58,83)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:28 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 83 pg[9.1a( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=83) [0]/[1] r=0 lpr=83 pi=[58,83)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:22:28 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 83 pg[9.a( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=83) [0]/[1] r=0 lpr=83 pi=[58,83)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:28 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 83 pg[9.a( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=83) [0]/[1] r=0 lpr=83 pi=[58,83)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:22:29 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 83 pg[9.9( v 56'1130 (0'0,56'1130] local-lis/les=82/83 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82) [2]/[1] async=[2] r=0 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:22:29 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 83 pg[9.18( v 56'1130 (0'0,56'1130] local-lis/les=82/83 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82) [2]/[1] async=[2] r=0 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:22:29 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 83 pg[9.8( v 56'1130 (0'0,56'1130] local-lis/les=82/83 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82) [2]/[1] async=[2] r=0 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:22:29 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 83 pg[9.19( v 56'1130 (0'0,56'1130] local-lis/les=82/83 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82) [2]/[1] async=[2] r=0 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:22:29 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 01:22:29 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 01:22:29 np0005539508 podman[99119]: 2025-11-29 06:22:29.155645675 +0000 UTC m=+0.055147496 container create 7239a10072ab86a9591e07a56afcaf8f17212a7854440619b242862e9c3685e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclaren, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 01:22:29 np0005539508 systemd[1]: Started libpod-conmon-7239a10072ab86a9591e07a56afcaf8f17212a7854440619b242862e9c3685e5.scope.
Nov 29 01:22:29 np0005539508 podman[99119]: 2025-11-29 06:22:29.131140751 +0000 UTC m=+0.030642592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:22:29 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:22:29 np0005539508 podman[99119]: 2025-11-29 06:22:29.274744218 +0000 UTC m=+0.174246039 container init 7239a10072ab86a9591e07a56afcaf8f17212a7854440619b242862e9c3685e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclaren, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 01:22:29 np0005539508 podman[99119]: 2025-11-29 06:22:29.283057567 +0000 UTC m=+0.182559378 container start 7239a10072ab86a9591e07a56afcaf8f17212a7854440619b242862e9c3685e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclaren, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 01:22:29 np0005539508 tender_mclaren[99135]: 167 167
Nov 29 01:22:29 np0005539508 podman[99119]: 2025-11-29 06:22:29.29148428 +0000 UTC m=+0.190986091 container attach 7239a10072ab86a9591e07a56afcaf8f17212a7854440619b242862e9c3685e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 01:22:29 np0005539508 systemd[1]: libpod-7239a10072ab86a9591e07a56afcaf8f17212a7854440619b242862e9c3685e5.scope: Deactivated successfully.
Nov 29 01:22:29 np0005539508 podman[99119]: 2025-11-29 06:22:29.293283681 +0000 UTC m=+0.192785502 container died 7239a10072ab86a9591e07a56afcaf8f17212a7854440619b242862e9c3685e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclaren, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 01:22:29 np0005539508 systemd[1]: var-lib-containers-storage-overlay-8234eb08ca3e82fa3c775e9787e1c83e343253caa637b471d5aab86ca868a17e-merged.mount: Deactivated successfully.
Nov 29 01:22:29 np0005539508 podman[99119]: 2025-11-29 06:22:29.426166671 +0000 UTC m=+0.325668482 container remove 7239a10072ab86a9591e07a56afcaf8f17212a7854440619b242862e9c3685e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 01:22:29 np0005539508 systemd[1]: libpod-conmon-7239a10072ab86a9591e07a56afcaf8f17212a7854440619b242862e9c3685e5.scope: Deactivated successfully.
Nov 29 01:22:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:22:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:22:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:22:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:22:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:22:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:22:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:22:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:22:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:22:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:22:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:29.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:29 np0005539508 podman[99161]: 2025-11-29 06:22:29.598758621 +0000 UTC m=+0.037435927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:22:29 np0005539508 podman[99161]: 2025-11-29 06:22:29.840674974 +0000 UTC m=+0.279352250 container create 28be7c59b6d057e8f78353c99749647468e3d3acbf9fd785cd90473e465c6e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Nov 29 01:22:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 29 01:22:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 4 unknown, 301 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:22:30 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 29 01:22:30 np0005539508 systemd[1]: Started libpod-conmon-28be7c59b6d057e8f78353c99749647468e3d3acbf9fd785cd90473e465c6e32.scope.
Nov 29 01:22:30 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:22:30 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f603e58711c1bb1d0463336c1c908f0b63417a9d6cf4f3ed742d603ee63d542f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:22:30 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f603e58711c1bb1d0463336c1c908f0b63417a9d6cf4f3ed742d603ee63d542f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:22:30 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f603e58711c1bb1d0463336c1c908f0b63417a9d6cf4f3ed742d603ee63d542f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:22:30 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f603e58711c1bb1d0463336c1c908f0b63417a9d6cf4f3ed742d603ee63d542f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:22:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:30.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:30 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 29 01:22:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:31.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:31 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 29 01:22:31 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 84 pg[9.9( v 56'1130 (0'0,56'1130] local-lis/les=82/83 n=6 ec=58/47 lis/c=82/58 les/c/f=83/59/0 sis=84 pruub=13.403115273s) [2] async=[2] r=-1 lpr=84 pi=[58,84)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 249.484466553s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:31 np0005539508 podman[99161]: 2025-11-29 06:22:31.657419451 +0000 UTC m=+2.096096747 container init 28be7c59b6d057e8f78353c99749647468e3d3acbf9fd785cd90473e465c6e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_diffie, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:22:31 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 84 pg[9.9( v 56'1130 (0'0,56'1130] local-lis/les=82/83 n=6 ec=58/47 lis/c=82/58 les/c/f=83/59/0 sis=84 pruub=13.402838707s) [2] r=-1 lpr=84 pi=[58,84)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.484466553s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:31 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 84 pg[9.19( v 56'1130 (0'0,56'1130] local-lis/les=82/83 n=5 ec=58/47 lis/c=82/58 les/c/f=83/59/0 sis=84 pruub=13.406016350s) [2] async=[2] r=-1 lpr=84 pi=[58,84)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 249.488967896s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:31 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 84 pg[9.19( v 56'1130 (0'0,56'1130] local-lis/les=82/83 n=5 ec=58/47 lis/c=82/58 les/c/f=83/59/0 sis=84 pruub=13.405915260s) [2] r=-1 lpr=84 pi=[58,84)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.488967896s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:31 np0005539508 podman[99161]: 2025-11-29 06:22:31.665340759 +0000 UTC m=+2.104018035 container start 28be7c59b6d057e8f78353c99749647468e3d3acbf9fd785cd90473e465c6e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_diffie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:22:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 4 active+remapped, 2 remapped+peering, 299 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 112 B/s, 4 objects/s recovering
Nov 29 01:22:32 np0005539508 keen_diffie[99177]: {
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:    "1": [
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:        {
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:            "devices": [
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:                "/dev/loop3"
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:            ],
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:            "lv_name": "ceph_lv0",
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:            "lv_size": "7511998464",
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:            "name": "ceph_lv0",
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:            "tags": {
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:                "ceph.cluster_name": "ceph",
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:                "ceph.crush_device_class": "",
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:                "ceph.encrypted": "0",
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:                "ceph.osd_id": "1",
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:                "ceph.type": "block",
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:                "ceph.vdo": "0"
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:            },
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:            "type": "block",
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:            "vg_name": "ceph_vg0"
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:        }
Nov 29 01:22:32 np0005539508 keen_diffie[99177]:    ]
Nov 29 01:22:32 np0005539508 keen_diffie[99177]: }
Nov 29 01:22:32 np0005539508 systemd[1]: libpod-28be7c59b6d057e8f78353c99749647468e3d3acbf9fd785cd90473e465c6e32.scope: Deactivated successfully.
Nov 29 01:22:32 np0005539508 systemd[1]: libpod-28be7c59b6d057e8f78353c99749647468e3d3acbf9fd785cd90473e465c6e32.scope: Consumed 1.016s CPU time.
Nov 29 01:22:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:22:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:32.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:22:33 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 29 01:22:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:33.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 2 peering, 2 active+remapped, 2 remapped+peering, 299 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 1 objects/s recovering
Nov 29 01:22:34 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 29 01:22:34 np0005539508 podman[99161]: 2025-11-29 06:22:34.427517848 +0000 UTC m=+4.866195224 container attach 28be7c59b6d057e8f78353c99749647468e3d3acbf9fd785cd90473e465c6e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 01:22:34 np0005539508 podman[99161]: 2025-11-29 06:22:34.428867006 +0000 UTC m=+4.867544322 container died 28be7c59b6d057e8f78353c99749647468e3d3acbf9fd785cd90473e465c6e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_diffie, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:22:34 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 84 pg[9.a( v 56'1130 (0'0,56'1130] local-lis/les=83/84 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[58,83)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:22:34 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 29 01:22:34 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 29 01:22:34 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 84 pg[9.1a( v 56'1130 (0'0,56'1130] local-lis/les=83/84 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[58,83)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:22:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:34.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:34 np0005539508 systemd[1]: var-lib-containers-storage-overlay-f603e58711c1bb1d0463336c1c908f0b63417a9d6cf4f3ed742d603ee63d542f-merged.mount: Deactivated successfully.
Nov 29 01:22:35 np0005539508 podman[99161]: 2025-11-29 06:22:35.193132113 +0000 UTC m=+5.631809409 container remove 28be7c59b6d057e8f78353c99749647468e3d3acbf9fd785cd90473e465c6e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_diffie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Nov 29 01:22:35 np0005539508 systemd[1]: libpod-conmon-28be7c59b6d057e8f78353c99749647468e3d3acbf9fd785cd90473e465c6e32.scope: Deactivated successfully.
Nov 29 01:22:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:35.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:35 np0005539508 podman[99352]: 2025-11-29 06:22:35.86210338 +0000 UTC m=+0.029149349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:22:35 np0005539508 podman[99352]: 2025-11-29 06:22:35.96614225 +0000 UTC m=+0.133188209 container create 2391eafeba9734bb8f28ded802f53c403f430f4c48482338cc6b55280aa8108c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cohen, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:22:36 np0005539508 systemd[1]: Started libpod-conmon-2391eafeba9734bb8f28ded802f53c403f430f4c48482338cc6b55280aa8108c.scope.
Nov 29 01:22:36 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:22:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 2 peering, 4 active+remapped, 299 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 530 B/s wr, 47 op/s; 56 B/s, 3 objects/s recovering
Nov 29 01:22:36 np0005539508 podman[99352]: 2025-11-29 06:22:36.107981296 +0000 UTC m=+0.275027275 container init 2391eafeba9734bb8f28ded802f53c403f430f4c48482338cc6b55280aa8108c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cohen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 01:22:36 np0005539508 podman[99352]: 2025-11-29 06:22:36.118442277 +0000 UTC m=+0.285488236 container start 2391eafeba9734bb8f28ded802f53c403f430f4c48482338cc6b55280aa8108c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cohen, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 01:22:36 np0005539508 zen_cohen[99368]: 167 167
Nov 29 01:22:36 np0005539508 systemd[1]: libpod-2391eafeba9734bb8f28ded802f53c403f430f4c48482338cc6b55280aa8108c.scope: Deactivated successfully.
Nov 29 01:22:36 np0005539508 podman[99352]: 2025-11-29 06:22:36.125587022 +0000 UTC m=+0.292632981 container attach 2391eafeba9734bb8f28ded802f53c403f430f4c48482338cc6b55280aa8108c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:22:36 np0005539508 podman[99352]: 2025-11-29 06:22:36.125996674 +0000 UTC m=+0.293042633 container died 2391eafeba9734bb8f28ded802f53c403f430f4c48482338cc6b55280aa8108c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cohen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 01:22:36 np0005539508 systemd[1]: var-lib-containers-storage-overlay-03bad5bd9e51593b12fdfd852e6484b69f72177503b6a6bbecaf9126ea0796bd-merged.mount: Deactivated successfully.
Nov 29 01:22:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:36.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 29 01:22:37 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 29 01:22:37 np0005539508 podman[99352]: 2025-11-29 06:22:37.21834861 +0000 UTC m=+1.385394609 container remove 2391eafeba9734bb8f28ded802f53c403f430f4c48482338cc6b55280aa8108c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cohen, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:22:37 np0005539508 systemd[1]: libpod-conmon-2391eafeba9734bb8f28ded802f53c403f430f4c48482338cc6b55280aa8108c.scope: Deactivated successfully.
Nov 29 01:22:37 np0005539508 podman[99391]: 2025-11-29 06:22:37.398461407 +0000 UTC m=+0.027763429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:22:37 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 85 pg[9.8( v 56'1130 (0'0,56'1130] local-lis/les=82/83 n=6 ec=58/47 lis/c=82/58 les/c/f=83/59/0 sis=85 pruub=15.455229759s) [2] async=[2] r=-1 lpr=85 pi=[58,85)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 257.489105225s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:37 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 85 pg[9.18( v 56'1130 (0'0,56'1130] local-lis/les=82/83 n=5 ec=58/47 lis/c=82/58 les/c/f=83/59/0 sis=85 pruub=15.455081940s) [2] async=[2] r=-1 lpr=85 pi=[58,85)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 257.489105225s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:37 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 85 pg[9.8( v 56'1130 (0'0,56'1130] local-lis/les=82/83 n=6 ec=58/47 lis/c=82/58 les/c/f=83/59/0 sis=85 pruub=15.455101013s) [2] r=-1 lpr=85 pi=[58,85)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 257.489105225s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:37 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 85 pg[9.18( v 56'1130 (0'0,56'1130] local-lis/les=82/83 n=5 ec=58/47 lis/c=82/58 les/c/f=83/59/0 sis=85 pruub=15.454952240s) [2] r=-1 lpr=85 pi=[58,85)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 257.489105225s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:37.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:37 np0005539508 podman[99391]: 2025-11-29 06:22:37.821529517 +0000 UTC m=+0.450831519 container create 4397ea6a1c27eea754d751539b6312923ea1e712d6cf32fef852a370fba6631c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:22:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 2 peering, 4 active+remapped, 299 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 511 B/s wr, 45 op/s; 54 B/s, 2 objects/s recovering
Nov 29 01:22:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 29 01:22:38 np0005539508 systemd[1]: Started libpod-conmon-4397ea6a1c27eea754d751539b6312923ea1e712d6cf32fef852a370fba6631c.scope.
Nov 29 01:22:38 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:22:38 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10cfbaf5de727c4a0adb2261ba42edeb0ccdb72ede96db860d8824fce79c2ee6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:22:38 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10cfbaf5de727c4a0adb2261ba42edeb0ccdb72ede96db860d8824fce79c2ee6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:22:38 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10cfbaf5de727c4a0adb2261ba42edeb0ccdb72ede96db860d8824fce79c2ee6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:22:38 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10cfbaf5de727c4a0adb2261ba42edeb0ccdb72ede96db860d8824fce79c2ee6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:22:38 np0005539508 podman[99391]: 2025-11-29 06:22:38.515213194 +0000 UTC m=+1.144515256 container init 4397ea6a1c27eea754d751539b6312923ea1e712d6cf32fef852a370fba6631c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 01:22:38 np0005539508 podman[99391]: 2025-11-29 06:22:38.528121025 +0000 UTC m=+1.157423027 container start 4397ea6a1c27eea754d751539b6312923ea1e712d6cf32fef852a370fba6631c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:22:38 np0005539508 podman[99391]: 2025-11-29 06:22:38.656154945 +0000 UTC m=+1.285456977 container attach 4397ea6a1c27eea754d751539b6312923ea1e712d6cf32fef852a370fba6631c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sanderson, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:22:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:38.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:39 np0005539508 charming_sanderson[99411]: {
Nov 29 01:22:39 np0005539508 charming_sanderson[99411]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:22:39 np0005539508 charming_sanderson[99411]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:22:39 np0005539508 charming_sanderson[99411]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:22:39 np0005539508 charming_sanderson[99411]:        "osd_id": 1,
Nov 29 01:22:39 np0005539508 charming_sanderson[99411]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:22:39 np0005539508 charming_sanderson[99411]:        "type": "bluestore"
Nov 29 01:22:39 np0005539508 charming_sanderson[99411]:    }
Nov 29 01:22:39 np0005539508 charming_sanderson[99411]: }
Nov 29 01:22:39 np0005539508 systemd[1]: libpod-4397ea6a1c27eea754d751539b6312923ea1e712d6cf32fef852a370fba6631c.scope: Deactivated successfully.
Nov 29 01:22:39 np0005539508 podman[99391]: 2025-11-29 06:22:39.45724632 +0000 UTC m=+2.086548372 container died 4397ea6a1c27eea754d751539b6312923ea1e712d6cf32fef852a370fba6631c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sanderson, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:22:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:22:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:39.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:22:40 np0005539508 systemd[1]: var-lib-containers-storage-overlay-10cfbaf5de727c4a0adb2261ba42edeb0ccdb72ede96db860d8824fce79c2ee6-merged.mount: Deactivated successfully.
Nov 29 01:22:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 2 peering, 2 active+remapped, 301 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 01:22:40 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Nov 29 01:22:40 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Nov 29 01:22:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:40.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:40 np0005539508 podman[99391]: 2025-11-29 06:22:40.90116336 +0000 UTC m=+3.530465412 container remove 4397ea6a1c27eea754d751539b6312923ea1e712d6cf32fef852a370fba6631c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sanderson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 01:22:40 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 29 01:22:40 np0005539508 systemd[1]: libpod-conmon-4397ea6a1c27eea754d751539b6312923ea1e712d6cf32fef852a370fba6631c.scope: Deactivated successfully.
Nov 29 01:22:41 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 29 01:22:41 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:22:41 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 86 pg[9.1a( v 56'1130 (0'0,56'1130] local-lis/les=83/84 n=5 ec=58/47 lis/c=83/58 les/c/f=84/59/0 sis=86 pruub=9.302009583s) [0] async=[0] r=-1 lpr=86 pi=[58,86)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 255.009017944s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:41 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 86 pg[9.1a( v 56'1130 (0'0,56'1130] local-lis/les=83/84 n=5 ec=58/47 lis/c=83/58 les/c/f=84/59/0 sis=86 pruub=9.301831245s) [0] r=-1 lpr=86 pi=[58,86)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 255.009017944s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:41 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 86 pg[9.a( v 56'1130 (0'0,56'1130] local-lis/les=83/84 n=6 ec=58/47 lis/c=83/58 les/c/f=84/59/0 sis=86 pruub=9.193807602s) [0] async=[0] r=-1 lpr=86 pi=[58,86)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 254.901672363s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:22:41 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 86 pg[9.a( v 56'1130 (0'0,56'1130] local-lis/les=83/84 n=6 ec=58/47 lis/c=83/58 les/c/f=84/59/0 sis=86 pruub=9.193541527s) [0] r=-1 lpr=86 pi=[58,86)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.901672363s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:22:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:22:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:41.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:22:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 4 peering, 301 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 1 objects/s recovering
Nov 29 01:22:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.002000059s ======
Nov 29 01:22:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:42.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000059s
Nov 29 01:22:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 29 01:22:43 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Nov 29 01:22:43 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Nov 29 01:22:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:43.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:43 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:22:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 1 objects/s recovering
Nov 29 01:22:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 29 01:22:44 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 29 01:22:44 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:44 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev a460cd3d-4bbf-4556-a1b6-57f8cc8d048e does not exist
Nov 29 01:22:44 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 48a3ec61-0af0-475d-8b6a-93cda3a0dca9 does not exist
Nov 29 01:22:44 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 2079cd49-7755-4b35-a6e3-4391fd914b00 does not exist
Nov 29 01:22:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:22:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:44.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:45 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Nov 29 01:22:45 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Nov 29 01:22:45 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 01:22:45 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 01:22:45 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 01:22:45 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 01:22:45 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:22:45 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:22:45 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 01:22:45 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 01:22:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:45.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:45 np0005539508 podman[99706]: 2025-11-29 06:22:45.76617007 +0000 UTC m=+0.116453601 container create 20638be0340ef59ceb2150b24991149b30488ff33d74a10730e2350a8ecb100f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:22:45 np0005539508 podman[99706]: 2025-11-29 06:22:45.681458252 +0000 UTC m=+0.031741883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:22:45 np0005539508 systemd[1]: Started libpod-conmon-20638be0340ef59ceb2150b24991149b30488ff33d74a10730e2350a8ecb100f.scope.
Nov 29 01:22:45 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:22:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 1 objects/s recovering
Nov 29 01:22:46 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:46 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:46 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 01:22:46 np0005539508 podman[99706]: 2025-11-29 06:22:46.275489747 +0000 UTC m=+0.625773368 container init 20638be0340ef59ceb2150b24991149b30488ff33d74a10730e2350a8ecb100f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 01:22:46 np0005539508 podman[99706]: 2025-11-29 06:22:46.286938373 +0000 UTC m=+0.637221904 container start 20638be0340ef59ceb2150b24991149b30488ff33d74a10730e2350a8ecb100f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 01:22:46 np0005539508 nifty_mirzakhani[99722]: 167 167
Nov 29 01:22:46 np0005539508 systemd[1]: libpod-20638be0340ef59ceb2150b24991149b30488ff33d74a10730e2350a8ecb100f.scope: Deactivated successfully.
Nov 29 01:22:46 np0005539508 podman[99706]: 2025-11-29 06:22:46.30863789 +0000 UTC m=+0.658921451 container attach 20638be0340ef59ceb2150b24991149b30488ff33d74a10730e2350a8ecb100f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 01:22:46 np0005539508 podman[99706]: 2025-11-29 06:22:46.310948568 +0000 UTC m=+0.661232119 container died 20638be0340ef59ceb2150b24991149b30488ff33d74a10730e2350a8ecb100f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:22:46 np0005539508 systemd[1]: var-lib-containers-storage-overlay-30ad1e9783d23b1b12afd731a94a888087c0a1113c88e3620719ff348631cb8e-merged.mount: Deactivated successfully.
Nov 29 01:22:46 np0005539508 podman[99706]: 2025-11-29 06:22:46.42371977 +0000 UTC m=+0.774003331 container remove 20638be0340ef59ceb2150b24991149b30488ff33d74a10730e2350a8ecb100f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 01:22:46 np0005539508 systemd[1]: libpod-conmon-20638be0340ef59ceb2150b24991149b30488ff33d74a10730e2350a8ecb100f.scope: Deactivated successfully.
Nov 29 01:22:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:22:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:46.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:22:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:47 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.vxabpq (monmap changed)...
Nov 29 01:22:47 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.vxabpq (monmap changed)...
Nov 29 01:22:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.vxabpq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 01:22:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.vxabpq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 01:22:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 01:22:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 01:22:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:22:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:22:47 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.vxabpq on compute-0
Nov 29 01:22:47 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.vxabpq on compute-0
Nov 29 01:22:47 np0005539508 ceph-mon[74654]: Reconfiguring mon.compute-0 (monmap changed)...
Nov 29 01:22:47 np0005539508 ceph-mon[74654]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 01:22:47 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:47 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:47 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.vxabpq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 01:22:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 01:22:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:47.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 01:22:47 np0005539508 podman[99859]: 2025-11-29 06:22:47.760705962 +0000 UTC m=+0.065546246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:22:47 np0005539508 podman[99859]: 2025-11-29 06:22:47.858239436 +0000 UTC m=+0.163079690 container create ca121ab1b226539af7dc25fb8ac890458d4fa259550c2235aa07aca4fc1e5579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 01:22:47 np0005539508 systemd[1]: Started libpod-conmon-ca121ab1b226539af7dc25fb8ac890458d4fa259550c2235aa07aca4fc1e5579.scope.
Nov 29 01:22:47 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:22:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Nov 29 01:22:48 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 29 01:22:48 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 01:22:48 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 29 01:22:48 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 29 01:22:48 np0005539508 podman[99859]: 2025-11-29 06:22:48.305418038 +0000 UTC m=+0.610258312 container init ca121ab1b226539af7dc25fb8ac890458d4fa259550c2235aa07aca4fc1e5579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 01:22:48 np0005539508 podman[99859]: 2025-11-29 06:22:48.316694869 +0000 UTC m=+0.621535143 container start ca121ab1b226539af7dc25fb8ac890458d4fa259550c2235aa07aca4fc1e5579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 01:22:48 np0005539508 funny_ritchie[99875]: 167 167
Nov 29 01:22:48 np0005539508 systemd[1]: libpod-ca121ab1b226539af7dc25fb8ac890458d4fa259550c2235aa07aca4fc1e5579.scope: Deactivated successfully.
Nov 29 01:22:48 np0005539508 ceph-mgr[74948]: [progress INFO root] Completed event 7f07609f-e0c7-4950-b4cf-712380532355 (Global Recovery Event) in 33 seconds
Nov 29 01:22:48 np0005539508 podman[99859]: 2025-11-29 06:22:48.594481267 +0000 UTC m=+0.899321551 container attach ca121ab1b226539af7dc25fb8ac890458d4fa259550c2235aa07aca4fc1e5579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 01:22:48 np0005539508 podman[99859]: 2025-11-29 06:22:48.595078304 +0000 UTC m=+0.899918588 container died ca121ab1b226539af7dc25fb8ac890458d4fa259550c2235aa07aca4fc1e5579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:22:48 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 29 01:22:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:48.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:49 np0005539508 ceph-mon[74654]: Reconfiguring mgr.compute-0.vxabpq (monmap changed)...
Nov 29 01:22:49 np0005539508 ceph-mon[74654]: Reconfiguring daemon mgr.compute-0.vxabpq on compute-0
Nov 29 01:22:49 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 01:22:49 np0005539508 systemd[1]: var-lib-containers-storage-overlay-4e15a2e2b64ff8d07b62b298690764860e16653a0358a82663c98515823cc01f-merged.mount: Deactivated successfully.
Nov 29 01:22:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:49.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:49 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 01:22:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 29 01:22:49 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 29 01:22:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 25 B/s, 1 objects/s recovering
Nov 29 01:22:50 np0005539508 podman[99859]: 2025-11-29 06:22:50.129953869 +0000 UTC m=+2.434794123 container remove ca121ab1b226539af7dc25fb8ac890458d4fa259550c2235aa07aca4fc1e5579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 01:22:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 29 01:22:50 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 01:22:50 np0005539508 systemd[1]: libpod-conmon-ca121ab1b226539af7dc25fb8ac890458d4fa259550c2235aa07aca4fc1e5579.scope: Deactivated successfully.
Nov 29 01:22:50 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 01:22:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:22:50 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:22:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 29 01:22:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:50.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:51 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:51 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Nov 29 01:22:51 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Nov 29 01:22:51 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 29 01:22:51 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 01:22:51 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:22:51 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:22:51 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Nov 29 01:22:51 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Nov 29 01:22:51 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 01:22:51 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 29 01:22:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:51.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:51 np0005539508 podman[100021]: 2025-11-29 06:22:51.614593127 +0000 UTC m=+0.027321233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:22:51 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 29 01:22:51 np0005539508 podman[100021]: 2025-11-29 06:22:51.787092453 +0000 UTC m=+0.199820589 container create b5ce1bc3107a6e338ce1ee009b83570a999a648f2b36a55f0b1f746fb1e876f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Nov 29 01:22:51 np0005539508 systemd[1]: Started libpod-conmon-b5ce1bc3107a6e338ce1ee009b83570a999a648f2b36a55f0b1f746fb1e876f1.scope.
Nov 29 01:22:51 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:22:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:22:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 29 01:22:52 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 01:22:52 np0005539508 podman[100021]: 2025-11-29 06:22:52.13466438 +0000 UTC m=+0.547392506 container init b5ce1bc3107a6e338ce1ee009b83570a999a648f2b36a55f0b1f746fb1e876f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_khayyam, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:22:52 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 01:22:52 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:52 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:52 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 01:22:52 np0005539508 podman[100021]: 2025-11-29 06:22:52.143530531 +0000 UTC m=+0.556258637 container start b5ce1bc3107a6e338ce1ee009b83570a999a648f2b36a55f0b1f746fb1e876f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_khayyam, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 01:22:52 np0005539508 quirky_khayyam[100037]: 167 167
Nov 29 01:22:52 np0005539508 systemd[1]: libpod-b5ce1bc3107a6e338ce1ee009b83570a999a648f2b36a55f0b1f746fb1e876f1.scope: Deactivated successfully.
Nov 29 01:22:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 29 01:22:52 np0005539508 podman[100021]: 2025-11-29 06:22:52.581950546 +0000 UTC m=+0.994678662 container attach b5ce1bc3107a6e338ce1ee009b83570a999a648f2b36a55f0b1f746fb1e876f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 01:22:52 np0005539508 podman[100021]: 2025-11-29 06:22:52.58311545 +0000 UTC m=+0.995843566 container died b5ce1bc3107a6e338ce1ee009b83570a999a648f2b36a55f0b1f746fb1e876f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_khayyam, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:22:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:22:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:52.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:22:52 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 01:22:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 29 01:22:52 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 29 01:22:52 np0005539508 systemd[1]: var-lib-containers-storage-overlay-a910b4efb09f0310fb20a07eacdcf0d75d5baf45b2c55b3877e5012c8d19fa84-merged.mount: Deactivated successfully.
Nov 29 01:22:53 np0005539508 podman[100021]: 2025-11-29 06:22:53.261073559 +0000 UTC m=+1.673801675 container remove b5ce1bc3107a6e338ce1ee009b83570a999a648f2b36a55f0b1f746fb1e876f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 01:22:53 np0005539508 systemd[1]: libpod-conmon-b5ce1bc3107a6e338ce1ee009b83570a999a648f2b36a55f0b1f746fb1e876f1.scope: Deactivated successfully.
Nov 29 01:22:53 np0005539508 ceph-mgr[74948]: [progress INFO root] Writing back 23 completed events
Nov 29 01:22:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 01:22:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:53.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:22:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 29 01:22:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:22:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 29 01:22:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 01:22:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:22:54
Nov 29 01:22:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:22:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:22:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['backups', '.rgw.root', 'vms', 'volumes', 'images', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 29 01:22:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:22:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:22:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:22:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:22:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:22:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:22:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:22:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:54.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:55 np0005539508 ceph-mon[74654]: Reconfiguring crash.compute-0 (monmap changed)...
Nov 29 01:22:55 np0005539508 ceph-mon[74654]: Reconfiguring daemon crash.compute-0 on compute-0
Nov 29 01:22:55 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 01:22:55 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 01:22:55 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 01:22:55 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 29 01:22:55 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 29 01:22:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:22:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:55.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:22:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:22:56 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:56 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 29 01:22:56 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 29 01:22:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 29 01:22:56 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 01:22:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:22:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:56.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:22:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 29 01:22:57 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:57 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 29 01:22:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:22:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:22:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:57.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:22:57 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 01:22:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:22:58 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 29 01:22:58 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:22:58 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Nov 29 01:22:58 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Nov 29 01:22:58 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 29 01:22:58 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 01:22:58 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:22:58 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:22:58 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-0
Nov 29 01:22:58 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-0
Nov 29 01:22:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:22:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:58.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:22:59 np0005539508 podman[100245]: 2025-11-29 06:22:59.033589639 +0000 UTC m=+0.032225707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:22:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:22:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 01:22:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:59.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 01:23:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:23:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:00.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:01 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.1f deep-scrub starts
Nov 29 01:23:01 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 01:23:01 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 01:23:01 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 29 01:23:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:01.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:01 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.1f deep-scrub ok
Nov 29 01:23:01 np0005539508 podman[100245]: 2025-11-29 06:23:01.859081135 +0000 UTC m=+2.857717183 container create 9ea8cda7476f620017cb4d19e175a3bf63ad1bdff28063d9412adad19e3a3cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 01:23:01 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 29 01:23:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:23:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 01:23:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:02.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 01:23:03 np0005539508 systemd[1]: Started libpod-conmon-9ea8cda7476f620017cb4d19e175a3bf63ad1bdff28063d9412adad19e3a3cdb.scope.
Nov 29 01:23:03 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 29 01:23:03 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:23:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 01:23:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:03.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 01:23:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 2 active+remapped, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 1 objects/s recovering
Nov 29 01:23:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 01:23:04 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 01:23:04 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 29 01:23:04 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:04 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 01:23:04 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:04 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:04 np0005539508 ceph-mon[74654]: Reconfiguring osd.1 (monmap changed)...
Nov 29 01:23:04 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 01:23:04 np0005539508 ceph-mon[74654]: Reconfiguring daemon osd.1 on compute-0
Nov 29 01:23:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:04.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:04 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 29 01:23:05 np0005539508 podman[100245]: 2025-11-29 06:23:05.08913605 +0000 UTC m=+6.087772128 container init 9ea8cda7476f620017cb4d19e175a3bf63ad1bdff28063d9412adad19e3a3cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 01:23:05 np0005539508 podman[100245]: 2025-11-29 06:23:05.100618198 +0000 UTC m=+6.099254286 container start 9ea8cda7476f620017cb4d19e175a3bf63ad1bdff28063d9412adad19e3a3cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 01:23:05 np0005539508 elastic_lamarr[100264]: 167 167
Nov 29 01:23:05 np0005539508 systemd[1]: libpod-9ea8cda7476f620017cb4d19e175a3bf63ad1bdff28063d9412adad19e3a3cdb.scope: Deactivated successfully.
Nov 29 01:23:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:05.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 2 active+remapped, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 119 B/s wr, 20 op/s; 76 B/s, 2 objects/s recovering
Nov 29 01:23:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 01:23:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 01:23:06 np0005539508 podman[100245]: 2025-11-29 06:23:06.24891551 +0000 UTC m=+7.247551588 container attach 9ea8cda7476f620017cb4d19e175a3bf63ad1bdff28063d9412adad19e3a3cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:23:06 np0005539508 podman[100245]: 2025-11-29 06:23:06.249789785 +0000 UTC m=+7.248425833 container died 9ea8cda7476f620017cb4d19e175a3bf63ad1bdff28063d9412adad19e3a3cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 01:23:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 29 01:23:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:23:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:06.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:23:07 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 29 01:23:07 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.7 deep-scrub starts
Nov 29 01:23:07 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.7 deep-scrub ok
Nov 29 01:23:07 np0005539508 systemd[1]: var-lib-containers-storage-overlay-569ce47c36525046265bdd1c2731c732dd0a48cce7ed60c4ab8bb425936298d4-merged.mount: Deactivated successfully.
Nov 29 01:23:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:07.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 29 01:23:07 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 01:23:07 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 01:23:07 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 01:23:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 2 active+remapped, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 127 B/s wr, 22 op/s; 82 B/s, 3 objects/s recovering
Nov 29 01:23:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:08.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 01:23:09 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 01:23:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 01:23:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:09.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 01:23:09 np0005539508 podman[100245]: 2025-11-29 06:23:09.84247574 +0000 UTC m=+10.841111798 container remove 9ea8cda7476f620017cb4d19e175a3bf63ad1bdff28063d9412adad19e3a3cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:23:09 np0005539508 systemd[1]: libpod-conmon-9ea8cda7476f620017cb4d19e175a3bf63ad1bdff28063d9412adad19e3a3cdb.scope: Deactivated successfully.
Nov 29 01:23:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 2 active+remapped, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 0 B/s wr, 21 op/s; 80 B/s, 3 objects/s recovering
Nov 29 01:23:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 01:23:10 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 01:23:10 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.e scrub starts
Nov 29 01:23:10 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.e scrub ok
Nov 29 01:23:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:10.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:11 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 29 01:23:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:11.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 9.6 KiB/s rd, 0 B/s wr, 17 op/s; 32 B/s, 1 objects/s recovering
Nov 29 01:23:12 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 01:23:12 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 01:23:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 29 01:23:12 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 29 01:23:12 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 29 01:23:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:23:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:23:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:23:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:23:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:23:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:23:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:23:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:23:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:23:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:23:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:23:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:23:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:23:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:23:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:23:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:23:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:23:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:23:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:23:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.724886004094547e-06 of space, bias 1.0, pg target 0.002017465801228364 quantized to 32 (current 32)
Nov 29 01:23:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:23:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:23:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:23:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:23:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:12.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:13 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 29 01:23:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:13.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:14 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 01:23:14 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 01:23:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:23:14 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.1a deep-scrub starts
Nov 29 01:23:14 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.1a deep-scrub ok
Nov 29 01:23:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:14.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 01:23:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:15.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 01:23:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:23:16 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 01:23:16 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 01:23:16 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 29 01:23:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 01:23:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:16.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 01:23:17 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:17 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 29 01:23:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:23:17 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 01:23:17 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 01:23:17 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 01:23:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:23:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:17.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:23:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 29 01:23:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:23:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:18 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Nov 29 01:23:18 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Nov 29 01:23:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 29 01:23:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 01:23:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:23:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:23:18 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Nov 29 01:23:18 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Nov 29 01:23:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 29 01:23:18 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 29 01:23:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:18.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 01:23:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 01:23:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 01:23:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 01:23:19 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:23:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 01:23:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:19.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 01:23:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 29 01:23:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 2 B/s, 0 objects/s recovering
Nov 29 01:23:20 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:20 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Nov 29 01:23:20 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Nov 29 01:23:20 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 29 01:23:20 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 29 01:23:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:20.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:20 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 29 01:23:20 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 01:23:20 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:23:20 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:23:20 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-1
Nov 29 01:23:20 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-1
Nov 29 01:23:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 29 01:23:21 np0005539508 ceph-mon[74654]: Reconfiguring crash.compute-1 (monmap changed)...
Nov 29 01:23:21 np0005539508 ceph-mon[74654]: Reconfiguring daemon crash.compute-1 on compute-1
Nov 29 01:23:21 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:21.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:21 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 29 01:23:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:23:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 2 B/s, 0 objects/s recovering
Nov 29 01:23:22 np0005539508 python3.9[100500]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:23:22 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 29 01:23:22 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 29 01:23:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 01:23:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:22.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:23.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:23 np0005539508 ceph-mon[74654]: Reconfiguring osd.0 (monmap changed)...
Nov 29 01:23:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 01:23:23 np0005539508 ceph-mon[74654]: Reconfiguring daemon osd.0 on compute-1
Nov 29 01:23:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:23:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 1 active+clean+scrubbing, 2 activating+remapped, 302 active+clean; 456 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 296 B/s wr, 25 op/s; 12/214 objects misplaced (5.607%); 18 B/s, 1 objects/s recovering
Nov 29 01:23:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:23:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:23:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:23:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:23:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:23:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:23:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:24 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Nov 29 01:23:24 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Nov 29 01:23:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 01:23:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 01:23:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 01:23:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 01:23:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:23:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:23:24 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Nov 29 01:23:24 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Nov 29 01:23:24 np0005539508 python3.9[100792]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 01:23:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:24.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 29 01:23:25 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:25 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:25 np0005539508 ceph-mon[74654]: Reconfiguring mon.compute-1 (monmap changed)...
Nov 29 01:23:25 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 01:23:25 np0005539508 ceph-mon[74654]: Reconfiguring daemon mon.compute-1 on compute-1
Nov 29 01:23:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:25.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:25 np0005539508 python3.9[100945]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 01:23:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 1 active+clean+scrubbing, 2 activating+remapped, 302 active+clean; 456 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 255 B/s wr, 22 op/s; 12/214 objects misplaced (5.607%); 15 B/s, 1 objects/s recovering
Nov 29 01:23:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 01:23:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 29 01:23:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:26.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:26 np0005539508 python3.9[101097]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:23:26 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 29 01:23:27 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:23:27 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 29 01:23:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:27.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 1 active+clean+scrubbing, 2 activating+remapped, 302 active+clean; 456 KiB data, 143 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 255 B/s wr, 22 op/s; 12/214 objects misplaced (5.607%); 27 B/s, 1 objects/s recovering
Nov 29 01:23:28 np0005539508 python3.9[101250]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 29 01:23:28 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:28 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:23:28 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Nov 29 01:23:28 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Nov 29 01:23:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:28.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:23:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:23:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:23:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:23:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:23:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:23:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:23:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:23:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:23:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:23:29 np0005539508 python3.9[101403]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:23:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:23:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:29.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:23:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 29 01:23:30 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 29 01:23:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 2 activating+remapped, 303 active+clean; 456 KiB data, 143 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 255 B/s wr, 22 op/s; 12/214 objects misplaced (5.607%); 27 B/s, 1 objects/s recovering
Nov 29 01:23:30 np0005539508 python3.9[101555]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:23:30 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:23:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:30.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:23:31 np0005539508 python3.9[101634]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:23:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:23:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:31.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:23:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 143 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Nov 29 01:23:32 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 29 01:23:32 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 01:23:32 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Nov 29 01:23:32 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Nov 29 01:23:32 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Nov 29 01:23:32 np0005539508 python3.9[101786]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:23:32 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Nov 29 01:23:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 29 01:23:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 01:23:32 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 01:23:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 01:23:32 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 01:23:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:23:32 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:23:32 np0005539508 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Nov 29 01:23:32 np0005539508 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Nov 29 01:23:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 01:23:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:32.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 01:23:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:23:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:33.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:23:33 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:23:34 np0005539508 python3.9[101941]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 01:23:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 456 KiB data, 143 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Nov 29 01:23:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 29 01:23:34 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 01:23:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:34.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:35 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Nov 29 01:23:35 np0005539508 python3.9[102095]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 01:23:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:23:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:35.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:23:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 456 KiB data, 143 MiB used, 21 GiB / 21 GiB avail; 24 B/s, 1 objects/s recovering
Nov 29 01:23:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 29 01:23:36 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 01:23:36 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.c scrub starts
Nov 29 01:23:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:23:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:36.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:23:37 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.c scrub ok
Nov 29 01:23:37 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Nov 29 01:23:37 np0005539508 python3.9[102249]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 01:23:37 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 01:23:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 29 01:23:37 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:37 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 01:23:37 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 29 01:23:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:23:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:37.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:37 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:23:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 2 active+clean+scrubbing, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Nov 29 01:23:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Nov 29 01:23:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 29 01:23:38 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 100 pg[9.10( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=100 pruub=8.477513313s) [0] r=-1 lpr=100 pi=[58,100)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 311.193450928s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:23:38 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 100 pg[9.10( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=100 pruub=8.477451324s) [0] r=-1 lpr=100 pi=[58,100)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 311.193450928s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:23:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 29 01:23:38 np0005539508 ceph-mon[74654]: Reconfiguring mon.compute-2 (monmap changed)...
Nov 29 01:23:38 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 01:23:38 np0005539508 ceph-mon[74654]: Reconfiguring daemon mon.compute-2 on compute-2
Nov 29 01:23:38 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 01:23:38 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 01:23:38 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 01:23:38 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:38 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:38 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 29 01:23:38 np0005539508 python3.9[102490]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 29 01:23:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 01:23:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 01:23:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 29 01:23:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 29 01:23:38 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 29 01:23:38 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 101 pg[9.11( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=101 pruub=8.267313957s) [0] r=-1 lpr=101 pi=[58,101)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 311.193511963s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:23:38 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 101 pg[9.11( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=101 pruub=8.267251968s) [0] r=-1 lpr=101 pi=[58,101)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 311.193511963s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:23:38 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 101 pg[9.10( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=101) [0]/[1] r=0 lpr=101 pi=[58,101)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:23:38 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 101 pg[9.10( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=101) [0]/[1] r=0 lpr=101 pi=[58,101)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:23:38 np0005539508 podman[102597]: 2025-11-29 06:23:38.792478206 +0000 UTC m=+0.065997659 container exec c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 01:23:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 01:23:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:38.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 01:23:38 np0005539508 podman[102597]: 2025-11-29 06:23:38.903295191 +0000 UTC m=+0.176814644 container exec_died c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 01:23:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 29 01:23:39 np0005539508 podman[102828]: 2025-11-29 06:23:39.59110332 +0000 UTC m=+0.060506158 container exec f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 01:23:39 np0005539508 podman[102828]: 2025-11-29 06:23:39.605169443 +0000 UTC m=+0.074572221 container exec_died f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 01:23:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 01:23:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:39.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:39 np0005539508 podman[102945]: 2025-11-29 06:23:39.812514862 +0000 UTC m=+0.068386660 container exec c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, release=1793, version=2.2.4, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived)
Nov 29 01:23:39 np0005539508 podman[102945]: 2025-11-29 06:23:39.853477145 +0000 UTC m=+0.109348943 container exec_died c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, version=2.2.4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, vendor=Red Hat, Inc., release=1793, vcs-type=git, com.redhat.component=keepalived-container, architecture=x86_64, io.openshift.tags=Ceph keepalived)
Nov 29 01:23:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:23:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 2 active+clean+scrubbing, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:23:40 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 29 01:23:40 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 01:23:40 np0005539508 python3.9[103031]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:23:40 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 29 01:23:40 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 29 01:23:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:40.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:23:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:41.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:23:42 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 01:23:42 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 01:23:42 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 29 01:23:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 1 unknown, 1 remapped+peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:23:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:23:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 29 01:23:42 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:42 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 29 01:23:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:23:42 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 102 pg[9.11( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=102) [0]/[1] r=0 lpr=102 pi=[58,102)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:23:42 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 102 pg[9.11( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=102) [0]/[1] r=0 lpr=102 pi=[58,102)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:23:42 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:23:42 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 102 pg[9.10( v 56'1130 (0'0,56'1130] local-lis/les=101/102 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[58,101)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:23:42 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:23:42 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:42 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:42 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:23:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 29 01:23:42 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 01:23:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 29 01:23:42 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 29 01:23:42 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 103 pg[9.12( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=4 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=103 pruub=11.911358833s) [0] r=-1 lpr=103 pi=[58,103)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 319.193572998s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:23:42 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 103 pg[9.12( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=4 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=103 pruub=11.911271095s) [0] r=-1 lpr=103 pi=[58,103)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 319.193572998s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:23:42 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 103 pg[9.10( v 56'1130 (0'0,56'1130] local-lis/les=101/102 n=6 ec=58/47 lis/c=101/58 les/c/f=102/59/0 sis=103 pruub=15.364741325s) [0] async=[0] r=-1 lpr=103 pi=[58,103)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 322.646942139s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:23:42 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 103 pg[9.10( v 56'1130 (0'0,56'1130] local-lis/les=101/102 n=6 ec=58/47 lis/c=101/58 les/c/f=102/59/0 sis=103 pruub=15.364190102s) [0] r=-1 lpr=103 pi=[58,103)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 322.646942139s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:23:42 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 103 pg[9.11( v 56'1130 (0'0,56'1130] local-lis/les=102/103 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=102) [0]/[1] async=[0] r=0 lpr=102 pi=[58,102)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:23:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:42.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:42 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 29 01:23:42 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 29 01:23:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:23:43 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:23:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:23:43 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:23:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:23:43 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 01:23:43 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:43 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:43 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:43 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:43 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:43 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:43 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 01:23:43 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:43 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev f5ad9944-795b-4b49-8a18-ab9d102a0260 does not exist
Nov 29 01:23:43 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev c892b06b-5a02-440f-9c92-6d43c04c7a6b does not exist
Nov 29 01:23:43 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev a47d4639-2471-4d54-bd0f-cacc1daaa05d does not exist
Nov 29 01:23:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:23:43 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:23:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:23:43 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:23:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:23:43 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:23:43 np0005539508 python3.9[103417]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:23:43 np0005539508 podman[103459]: 2025-11-29 06:23:43.688944259 +0000 UTC m=+0.040814629 container create 71ec86b0fb0602e2e0571d1658145eddbec28f06201573e05ded0b4fe512c93e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 01:23:43 np0005539508 systemd[1]: Started libpod-conmon-71ec86b0fb0602e2e0571d1658145eddbec28f06201573e05ded0b4fe512c93e.scope.
Nov 29 01:23:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:43.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:43 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:23:43 np0005539508 podman[103459]: 2025-11-29 06:23:43.670359694 +0000 UTC m=+0.022230084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:23:43 np0005539508 podman[103459]: 2025-11-29 06:23:43.792239183 +0000 UTC m=+0.144109583 container init 71ec86b0fb0602e2e0571d1658145eddbec28f06201573e05ded0b4fe512c93e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:23:43 np0005539508 podman[103459]: 2025-11-29 06:23:43.799148426 +0000 UTC m=+0.151018796 container start 71ec86b0fb0602e2e0571d1658145eddbec28f06201573e05ded0b4fe512c93e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bell, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:23:43 np0005539508 podman[103459]: 2025-11-29 06:23:43.803085951 +0000 UTC m=+0.154956321 container attach 71ec86b0fb0602e2e0571d1658145eddbec28f06201573e05ded0b4fe512c93e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 01:23:43 np0005539508 agitated_bell[103493]: 167 167
Nov 29 01:23:43 np0005539508 systemd[1]: libpod-71ec86b0fb0602e2e0571d1658145eddbec28f06201573e05ded0b4fe512c93e.scope: Deactivated successfully.
Nov 29 01:23:43 np0005539508 podman[103459]: 2025-11-29 06:23:43.806211773 +0000 UTC m=+0.158082163 container died 71ec86b0fb0602e2e0571d1658145eddbec28f06201573e05ded0b4fe512c93e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 01:23:43 np0005539508 systemd[1]: var-lib-containers-storage-overlay-293d0eec92dfd52a619978722a5c91cbb549b524c75db94693551e452e74e565-merged.mount: Deactivated successfully.
Nov 29 01:23:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 29 01:23:43 np0005539508 podman[103459]: 2025-11-29 06:23:43.851682938 +0000 UTC m=+0.203553318 container remove 71ec86b0fb0602e2e0571d1658145eddbec28f06201573e05ded0b4fe512c93e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 01:23:43 np0005539508 systemd[1]: libpod-conmon-71ec86b0fb0602e2e0571d1658145eddbec28f06201573e05ded0b4fe512c93e.scope: Deactivated successfully.
Nov 29 01:23:44 np0005539508 podman[103524]: 2025-11-29 06:23:44.030499739 +0000 UTC m=+0.059886719 container create 1477d0cd0cca66dc33cb20eab36f9c3c9fbce36bccf779d2c92313346038194e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chandrasekhar, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 01:23:44 np0005539508 systemd[1]: Started libpod-conmon-1477d0cd0cca66dc33cb20eab36f9c3c9fbce36bccf779d2c92313346038194e.scope.
Nov 29 01:23:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 1 unknown, 1 remapped+peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:23:44 np0005539508 podman[103524]: 2025-11-29 06:23:44.001206058 +0000 UTC m=+0.030593108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:23:44 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:23:44 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc0e482fc23608cb6dcb5bc30d5fc8dac05ead32b639ed9b7eef210e46dab12c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:23:44 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc0e482fc23608cb6dcb5bc30d5fc8dac05ead32b639ed9b7eef210e46dab12c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:23:44 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc0e482fc23608cb6dcb5bc30d5fc8dac05ead32b639ed9b7eef210e46dab12c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:23:44 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc0e482fc23608cb6dcb5bc30d5fc8dac05ead32b639ed9b7eef210e46dab12c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:23:44 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc0e482fc23608cb6dcb5bc30d5fc8dac05ead32b639ed9b7eef210e46dab12c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:23:44 np0005539508 podman[103524]: 2025-11-29 06:23:44.118152273 +0000 UTC m=+0.147539273 container init 1477d0cd0cca66dc33cb20eab36f9c3c9fbce36bccf779d2c92313346038194e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 01:23:44 np0005539508 podman[103524]: 2025-11-29 06:23:44.12554297 +0000 UTC m=+0.154929950 container start 1477d0cd0cca66dc33cb20eab36f9c3c9fbce36bccf779d2c92313346038194e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chandrasekhar, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:23:44 np0005539508 podman[103524]: 2025-11-29 06:23:44.129060383 +0000 UTC m=+0.158447383 container attach 1477d0cd0cca66dc33cb20eab36f9c3c9fbce36bccf779d2c92313346038194e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chandrasekhar, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:23:44 np0005539508 python3.9[103673]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:23:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:44.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:44 np0005539508 unruffled_chandrasekhar[103564]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:23:44 np0005539508 unruffled_chandrasekhar[103564]: --> relative data size: 1.0
Nov 29 01:23:44 np0005539508 unruffled_chandrasekhar[103564]: --> All data devices are unavailable
Nov 29 01:23:45 np0005539508 systemd[1]: libpod-1477d0cd0cca66dc33cb20eab36f9c3c9fbce36bccf779d2c92313346038194e.scope: Deactivated successfully.
Nov 29 01:23:45 np0005539508 podman[103524]: 2025-11-29 06:23:45.027529798 +0000 UTC m=+1.056916768 container died 1477d0cd0cca66dc33cb20eab36f9c3c9fbce36bccf779d2c92313346038194e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chandrasekhar, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:23:45 np0005539508 python3.9[103755]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:23:45 np0005539508 systemd[1]: var-lib-containers-storage-overlay-fc0e482fc23608cb6dcb5bc30d5fc8dac05ead32b639ed9b7eef210e46dab12c-merged.mount: Deactivated successfully.
Nov 29 01:23:45 np0005539508 podman[103524]: 2025-11-29 06:23:45.086510781 +0000 UTC m=+1.115897801 container remove 1477d0cd0cca66dc33cb20eab36f9c3c9fbce36bccf779d2c92313346038194e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chandrasekhar, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:23:45 np0005539508 systemd[1]: libpod-conmon-1477d0cd0cca66dc33cb20eab36f9c3c9fbce36bccf779d2c92313346038194e.scope: Deactivated successfully.
Nov 29 01:23:45 np0005539508 podman[103940]: 2025-11-29 06:23:45.630011942 +0000 UTC m=+0.050375961 container create 8ebb4f9a189399445786068fd9a0579872c3e57963c9b09fffbaaf412bcf53ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_feynman, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 01:23:45 np0005539508 systemd[1]: Started libpod-conmon-8ebb4f9a189399445786068fd9a0579872c3e57963c9b09fffbaaf412bcf53ba.scope.
Nov 29 01:23:45 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:23:45 np0005539508 podman[103940]: 2025-11-29 06:23:45.605572274 +0000 UTC m=+0.025936373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:23:45 np0005539508 podman[103940]: 2025-11-29 06:23:45.716425129 +0000 UTC m=+0.136789228 container init 8ebb4f9a189399445786068fd9a0579872c3e57963c9b09fffbaaf412bcf53ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:23:45 np0005539508 podman[103940]: 2025-11-29 06:23:45.723923939 +0000 UTC m=+0.144287958 container start 8ebb4f9a189399445786068fd9a0579872c3e57963c9b09fffbaaf412bcf53ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_feynman, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 01:23:45 np0005539508 podman[103940]: 2025-11-29 06:23:45.727654379 +0000 UTC m=+0.148018418 container attach 8ebb4f9a189399445786068fd9a0579872c3e57963c9b09fffbaaf412bcf53ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_feynman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 29 01:23:45 np0005539508 systemd[1]: libpod-8ebb4f9a189399445786068fd9a0579872c3e57963c9b09fffbaaf412bcf53ba.scope: Deactivated successfully.
Nov 29 01:23:45 np0005539508 jovial_feynman[103957]: 167 167
Nov 29 01:23:45 np0005539508 conmon[103957]: conmon 8ebb4f9a189399445786 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8ebb4f9a189399445786068fd9a0579872c3e57963c9b09fffbaaf412bcf53ba.scope/container/memory.events
Nov 29 01:23:45 np0005539508 podman[103940]: 2025-11-29 06:23:45.731145581 +0000 UTC m=+0.151509630 container died 8ebb4f9a189399445786068fd9a0579872c3e57963c9b09fffbaaf412bcf53ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_feynman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 01:23:45 np0005539508 systemd[1]: var-lib-containers-storage-overlay-b67e681f0ca82b488ec9acdd24020114212542b7baa7b3543d58d8619f3105ef-merged.mount: Deactivated successfully.
Nov 29 01:23:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:45.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:45 np0005539508 podman[103940]: 2025-11-29 06:23:45.779735868 +0000 UTC m=+0.200099907 container remove 8ebb4f9a189399445786068fd9a0579872c3e57963c9b09fffbaaf412bcf53ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_feynman, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:23:45 np0005539508 systemd[1]: libpod-conmon-8ebb4f9a189399445786068fd9a0579872c3e57963c9b09fffbaaf412bcf53ba.scope: Deactivated successfully.
Nov 29 01:23:45 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 29 01:23:45 np0005539508 podman[104027]: 2025-11-29 06:23:45.984986246 +0000 UTC m=+0.040781209 container create fef7d7570a47faedf2f06bb84d6aee73af55a01f297cad076b8c4f4be121dcec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 01:23:46 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 29 01:23:46 np0005539508 systemd[1]: Started libpod-conmon-fef7d7570a47faedf2f06bb84d6aee73af55a01f297cad076b8c4f4be121dcec.scope.
Nov 29 01:23:46 np0005539508 podman[104027]: 2025-11-29 06:23:45.966561755 +0000 UTC m=+0.022356758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:23:46 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:23:46 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e43804128a90e7169ee269febe1ee3d6ca4c36e8a0bce3f4ee60d0807efe3725/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:23:46 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e43804128a90e7169ee269febe1ee3d6ca4c36e8a0bce3f4ee60d0807efe3725/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:23:46 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e43804128a90e7169ee269febe1ee3d6ca4c36e8a0bce3f4ee60d0807efe3725/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:23:46 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e43804128a90e7169ee269febe1ee3d6ca4c36e8a0bce3f4ee60d0807efe3725/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:23:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 1 unknown, 1 remapped+peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:23:46 np0005539508 podman[104027]: 2025-11-29 06:23:46.092504223 +0000 UTC m=+0.148299276 container init fef7d7570a47faedf2f06bb84d6aee73af55a01f297cad076b8c4f4be121dcec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:23:46 np0005539508 podman[104027]: 2025-11-29 06:23:46.105546077 +0000 UTC m=+0.161341080 container start fef7d7570a47faedf2f06bb84d6aee73af55a01f297cad076b8c4f4be121dcec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_boyd, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:23:46 np0005539508 podman[104027]: 2025-11-29 06:23:46.110238094 +0000 UTC m=+0.166033107 container attach fef7d7570a47faedf2f06bb84d6aee73af55a01f297cad076b8c4f4be121dcec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 01:23:46 np0005539508 python3.9[104129]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:23:46 np0005539508 zen_boyd[104072]: {
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:    "1": [
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:        {
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:            "devices": [
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:                "/dev/loop3"
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:            ],
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:            "lv_name": "ceph_lv0",
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:            "lv_size": "7511998464",
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:            "name": "ceph_lv0",
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:            "tags": {
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:                "ceph.cluster_name": "ceph",
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:                "ceph.crush_device_class": "",
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:                "ceph.encrypted": "0",
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:                "ceph.osd_id": "1",
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:                "ceph.type": "block",
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:                "ceph.vdo": "0"
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:            },
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:            "type": "block",
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:            "vg_name": "ceph_vg0"
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:        }
Nov 29 01:23:46 np0005539508 zen_boyd[104072]:    ]
Nov 29 01:23:46 np0005539508 zen_boyd[104072]: }
Nov 29 01:23:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:23:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:46.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:23:46 np0005539508 systemd[1]: libpod-fef7d7570a47faedf2f06bb84d6aee73af55a01f297cad076b8c4f4be121dcec.scope: Deactivated successfully.
Nov 29 01:23:46 np0005539508 podman[104027]: 2025-11-29 06:23:46.932710721 +0000 UTC m=+0.988505714 container died fef7d7570a47faedf2f06bb84d6aee73af55a01f297cad076b8c4f4be121dcec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 01:23:47 np0005539508 python3.9[104209]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:23:47 np0005539508 systemd[1]: var-lib-containers-storage-overlay-e43804128a90e7169ee269febe1ee3d6ca4c36e8a0bce3f4ee60d0807efe3725-merged.mount: Deactivated successfully.
Nov 29 01:23:47 np0005539508 podman[104027]: 2025-11-29 06:23:47.115256283 +0000 UTC m=+1.171051246 container remove fef7d7570a47faedf2f06bb84d6aee73af55a01f297cad076b8c4f4be121dcec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 01:23:47 np0005539508 systemd[1]: libpod-conmon-fef7d7570a47faedf2f06bb84d6aee73af55a01f297cad076b8c4f4be121dcec.scope: Deactivated successfully.
Nov 29 01:23:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 29 01:23:47 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 29 01:23:47 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 104 pg[9.12( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=4 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=104) [0]/[1] r=0 lpr=104 pi=[58,104)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:23:47 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 104 pg[9.11( v 56'1130 (0'0,56'1130] local-lis/les=102/103 n=6 ec=58/47 lis/c=102/58 les/c/f=103/59/0 sis=104 pruub=11.483637810s) [0] async=[0] r=-1 lpr=104 pi=[58,104)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 323.288909912s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:23:47 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 104 pg[9.12( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=4 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=104) [0]/[1] r=0 lpr=104 pi=[58,104)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 01:23:47 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 104 pg[9.11( v 56'1130 (0'0,56'1130] local-lis/les=102/103 n=6 ec=58/47 lis/c=102/58 les/c/f=103/59/0 sis=104 pruub=11.483239174s) [0] r=-1 lpr=104 pi=[58,104)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 323.288909912s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:23:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:47.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:23:47 np0005539508 podman[104470]: 2025-11-29 06:23:47.85736643 +0000 UTC m=+0.058017627 container create e0a2b7fb25562f7e770c3104e71e5f83bd32ce81c70e908bc814021a7a33449a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_einstein, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 01:23:47 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 29 01:23:47 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 29 01:23:47 np0005539508 systemd[1]: Started libpod-conmon-e0a2b7fb25562f7e770c3104e71e5f83bd32ce81c70e908bc814021a7a33449a.scope.
Nov 29 01:23:47 np0005539508 podman[104470]: 2025-11-29 06:23:47.830353294 +0000 UTC m=+0.031004571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:23:47 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:23:48 np0005539508 podman[104470]: 2025-11-29 06:23:48.010872677 +0000 UTC m=+0.211523944 container init e0a2b7fb25562f7e770c3104e71e5f83bd32ce81c70e908bc814021a7a33449a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:23:48 np0005539508 podman[104470]: 2025-11-29 06:23:48.022838741 +0000 UTC m=+0.223489978 container start e0a2b7fb25562f7e770c3104e71e5f83bd32ce81c70e908bc814021a7a33449a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_einstein, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:23:48 np0005539508 tender_einstein[104532]: 167 167
Nov 29 01:23:48 np0005539508 systemd[1]: libpod-e0a2b7fb25562f7e770c3104e71e5f83bd32ce81c70e908bc814021a7a33449a.scope: Deactivated successfully.
Nov 29 01:23:48 np0005539508 podman[104470]: 2025-11-29 06:23:48.037015468 +0000 UTC m=+0.237666745 container attach e0a2b7fb25562f7e770c3104e71e5f83bd32ce81c70e908bc814021a7a33449a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 01:23:48 np0005539508 podman[104470]: 2025-11-29 06:23:48.038374287 +0000 UTC m=+0.239025524 container died e0a2b7fb25562f7e770c3104e71e5f83bd32ce81c70e908bc814021a7a33449a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 01:23:48 np0005539508 systemd[1]: var-lib-containers-storage-overlay-9003ef961812caaec732846723c274499f48864042770d704a520677fcd4bba3-merged.mount: Deactivated successfully.
Nov 29 01:23:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 1 unknown, 1 active+remapped, 1 peering, 302 active+clean; 455 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 01:23:48 np0005539508 python3.9[104536]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:23:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:23:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:23:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:23:48 np0005539508 podman[104470]: 2025-11-29 06:23:48.334125888 +0000 UTC m=+0.534777095 container remove e0a2b7fb25562f7e770c3104e71e5f83bd32ce81c70e908bc814021a7a33449a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:23:48 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 29 01:23:48 np0005539508 systemd[1]: libpod-conmon-e0a2b7fb25562f7e770c3104e71e5f83bd32ce81c70e908bc814021a7a33449a.scope: Deactivated successfully.
Nov 29 01:23:48 np0005539508 podman[104562]: 2025-11-29 06:23:48.493131614 +0000 UTC m=+0.037219920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:23:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:23:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:48.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:23:49 np0005539508 podman[104562]: 2025-11-29 06:23:49.0885564 +0000 UTC m=+0.632644696 container create c8e2c0de8d45b275dfe3a29a2e288620cc932118c2cdeba9e7622f57b6edaa19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:23:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:49.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 29 01:23:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 1 unknown, 1 active+remapped, 1 peering, 302 active+clean; 455 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 01:23:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:50.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:51 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 29 01:23:51 np0005539508 systemd[1]: Started libpod-conmon-c8e2c0de8d45b275dfe3a29a2e288620cc932118c2cdeba9e7622f57b6edaa19.scope.
Nov 29 01:23:51 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:23:51 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5b29e6cd452a931580e8020b6cbe1b57de5ce94ed949805ba37a611d74918d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:23:51 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5b29e6cd452a931580e8020b6cbe1b57de5ce94ed949805ba37a611d74918d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:23:51 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5b29e6cd452a931580e8020b6cbe1b57de5ce94ed949805ba37a611d74918d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:23:51 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5b29e6cd452a931580e8020b6cbe1b57de5ce94ed949805ba37a611d74918d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:23:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:23:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:51.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:23:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 1 activating+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 4/219 objects misplaced (1.826%); 13 B/s, 0 objects/s recovering
Nov 29 01:23:52 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Nov 29 01:23:52 np0005539508 python3.9[104732]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:23:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:52.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:53.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 1 activating+remapped, 304 active+clean; 455 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 4/219 objects misplaced (1.826%); 13 B/s, 0 objects/s recovering
Nov 29 01:23:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:23:54
Nov 29 01:23:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:23:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Some PGs (0.003279) are inactive; try again later
Nov 29 01:23:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:23:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:23:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:23:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:23:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:23:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:23:54 np0005539508 python3.9[104885]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 01:23:54 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 29 01:23:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:54.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:55 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:23:55 np0005539508 podman[104562]: 2025-11-29 06:23:55.30964532 +0000 UTC m=+6.853733706 container init c8e2c0de8d45b275dfe3a29a2e288620cc932118c2cdeba9e7622f57b6edaa19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 01:23:55 np0005539508 podman[104562]: 2025-11-29 06:23:55.320000047 +0000 UTC m=+6.864088343 container start c8e2c0de8d45b275dfe3a29a2e288620cc932118c2cdeba9e7622f57b6edaa19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 01:23:55 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 105 pg[9.12( v 56'1130 (0'0,56'1130] local-lis/les=104/105 n=4 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=104) [0]/[1] async=[0] r=0 lpr=104 pi=[58,104)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:23:55 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Nov 29 01:23:55 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 29 01:23:55 np0005539508 podman[104562]: 2025-11-29 06:23:55.387143895 +0000 UTC m=+6.931232191 container attach c8e2c0de8d45b275dfe3a29a2e288620cc932118c2cdeba9e7622f57b6edaa19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 01:23:55 np0005539508 python3.9[105036]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:23:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:23:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:55.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:23:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 1 activating+remapped, 304 active+clean; 455 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 4/219 objects misplaced (1.826%)
Nov 29 01:23:56 np0005539508 funny_greider[104580]: {
Nov 29 01:23:56 np0005539508 funny_greider[104580]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:23:56 np0005539508 funny_greider[104580]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:23:56 np0005539508 funny_greider[104580]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:23:56 np0005539508 funny_greider[104580]:        "osd_id": 1,
Nov 29 01:23:56 np0005539508 funny_greider[104580]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:23:56 np0005539508 funny_greider[104580]:        "type": "bluestore"
Nov 29 01:23:56 np0005539508 funny_greider[104580]:    }
Nov 29 01:23:56 np0005539508 funny_greider[104580]: }
Nov 29 01:23:56 np0005539508 systemd[1]: libpod-c8e2c0de8d45b275dfe3a29a2e288620cc932118c2cdeba9e7622f57b6edaa19.scope: Deactivated successfully.
Nov 29 01:23:56 np0005539508 podman[104562]: 2025-11-29 06:23:56.16119446 +0000 UTC m=+7.705282776 container died c8e2c0de8d45b275dfe3a29a2e288620cc932118c2cdeba9e7622f57b6edaa19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:23:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 29 01:23:56 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 29 01:23:56 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 29 01:23:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:23:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:56.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:23:57 np0005539508 python3.9[105219]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:23:57 np0005539508 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 01:23:57 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.c scrub starts
Nov 29 01:23:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:23:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:57.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:23:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 01:23:58 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.c scrub ok
Nov 29 01:23:58 np0005539508 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 01:23:58 np0005539508 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 01:23:58 np0005539508 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 01:23:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:58.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:23:59 np0005539508 systemd[1]: var-lib-containers-storage-overlay-fc5b29e6cd452a931580e8020b6cbe1b57de5ce94ed949805ba37a611d74918d-merged.mount: Deactivated successfully.
Nov 29 01:23:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 29 01:23:59 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 01:23:59 np0005539508 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 01:23:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 29 01:23:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:23:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:23:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:59.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 01:24:00 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 29 01:24:00 np0005539508 python3.9[105432]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 01:24:00 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 29 01:24:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:24:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:00.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:24:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:24:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:01.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:24:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 01:24:02 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 29 01:24:02 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:24:02 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 29 01:24:02 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Nov 29 01:24:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:02.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:03 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 29 01:24:03 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 01:24:03 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 29 01:24:03 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 01:24:03 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 106 pg[9.12( v 56'1130 (0'0,56'1130] local-lis/les=104/105 n=4 ec=58/47 lis/c=104/58 les/c/f=105/59/0 sis=106 pruub=8.048233032s) [0] async=[0] r=-1 lpr=106 pi=[58,106)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 335.750701904s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:24:03 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 106 pg[9.12( v 56'1130 (0'0,56'1130] local-lis/les=104/105 n=4 ec=58/47 lis/c=104/58 les/c/f=105/59/0 sis=106 pruub=8.047649384s) [0] r=-1 lpr=106 pi=[58,106)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 335.750701904s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 01:24:03 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Nov 29 01:24:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:24:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:03.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:24:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 1 active+clean+scrubbing, 1 active+remapped, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:24:04 np0005539508 podman[104562]: 2025-11-29 06:24:04.557898887 +0000 UTC m=+16.101987183 container remove c8e2c0de8d45b275dfe3a29a2e288620cc932118c2cdeba9e7622f57b6edaa19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:24:04 np0005539508 systemd[1]: libpod-conmon-c8e2c0de8d45b275dfe3a29a2e288620cc932118c2cdeba9e7622f57b6edaa19.scope: Deactivated successfully.
Nov 29 01:24:04 np0005539508 python3.9[105593]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:24:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:04.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 29 01:24:04 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 01:24:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:24:05 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 01:24:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 29 01:24:05 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 29 01:24:05 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 29 01:24:05 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 29 01:24:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:05.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:05 np0005539508 python3.9[105750]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:24:05 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 01:24:05 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 01:24:05 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 01:24:05 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 01:24:05 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:24:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:24:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 1 active+clean+scrubbing, 1 active+remapped, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:24:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 29 01:24:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 01:24:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 29 01:24:06 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.15 deep-scrub starts
Nov 29 01:24:06 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.15 deep-scrub ok
Nov 29 01:24:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:06.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:07 np0005539508 systemd[1]: session-35.scope: Deactivated successfully.
Nov 29 01:24:07 np0005539508 systemd[1]: session-35.scope: Consumed 1min 11.587s CPU time.
Nov 29 01:24:07 np0005539508 systemd-logind[797]: Session 35 logged out. Waiting for processes to exit.
Nov 29 01:24:07 np0005539508 systemd-logind[797]: Removed session 35.
Nov 29 01:24:07 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:24:07 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 526ebd6b-5022-488f-94e1-22537738e9ee does not exist
Nov 29 01:24:07 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 67a0cce2-3301-4f33-bd35-bfbc77f648b8 does not exist
Nov 29 01:24:07 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 34152fcc-0354-4358-9e09-156df2d0b0fe does not exist
Nov 29 01:24:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:07.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:24:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 29 01:24:08 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 01:24:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:08.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:24:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:09.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:24:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:24:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 29 01:24:10 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 01:24:10 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 01:24:10 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 01:24:10 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 01:24:10 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 01:24:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 29 01:24:10 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 01:24:10 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:24:10 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 01:24:10 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 29 01:24:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:24:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:10.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:24:10 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 29 01:24:11 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 29 01:24:11 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 29 01:24:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:24:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:11.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:24:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:24:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 29 01:24:12 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 01:24:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:24:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:24:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:24:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:24:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:24:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:24:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:24:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:24:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:24:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:24:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:24:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:24:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:24:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:24:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:24:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:24:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:24:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:24:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:24:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:24:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:24:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:24:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:24:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:24:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:12.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:24:13 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 01:24:13 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 01:24:13 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 29 01:24:13 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:24:13 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 01:24:13 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 01:24:13 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 01:24:13 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 01:24:13 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 01:24:13 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 01:24:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:13.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:13 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 29 01:24:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:24:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 29 01:24:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 29 01:24:14 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 01:24:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:14.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:15 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.1f deep-scrub starts
Nov 29 01:24:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:15.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:24:16 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 29 01:24:16 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 01:24:16 np0005539508 systemd-logind[797]: New session 36 of user zuul.
Nov 29 01:24:16 np0005539508 systemd[1]: Started Session 36 of User zuul.
Nov 29 01:24:16 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.8 deep-scrub starts
Nov 29 01:24:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:16.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:17 np0005539508 python3.9[105987]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:24:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:17.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:18 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.1f deep-scrub ok
Nov 29 01:24:18 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.8 deep-scrub ok
Nov 29 01:24:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 2 active+clean+scrubbing+deep, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:24:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:18.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:19 np0005539508 python3.9[106147]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 01:24:19 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.14 deep-scrub starts
Nov 29 01:24:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 29 01:24:19 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 01:24:19 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.14 deep-scrub ok
Nov 29 01:24:19 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 01:24:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 29 01:24:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 01:24:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 01:24:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:19.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:19 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 29 01:24:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 2 active+clean+scrubbing+deep, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:24:20 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Nov 29 01:24:20 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 01:24:20 np0005539508 python3.9[106350]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 01:24:20 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 29 01:24:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:20.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:21 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 01:24:21 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 01:24:21 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 01:24:21 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 01:24:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 29 01:24:21 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.13 deep-scrub starts
Nov 29 01:24:21 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.13 deep-scrub ok
Nov 29 01:24:21 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 29 01:24:21 np0005539508 python3.9[106435]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 01:24:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:24:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:21.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:24:22 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Nov 29 01:24:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 2 active+clean+scrubbing+deep, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:24:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Nov 29 01:24:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 29 01:24:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 29 01:24:22 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Nov 29 01:24:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:22.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:23.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:24:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Nov 29 01:24:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 29 01:24:24 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 29 01:24:24 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 29 01:24:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:24:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:24:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:24:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:24:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:24:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:24:24 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 01:24:24 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 01:24:24 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 01:24:24 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 01:24:24 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 01:24:24 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 01:24:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:24:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:24.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:24:25 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 29 01:24:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 29 01:24:25 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 29 01:24:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:25.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 1 active+clean+scrubbing, 1 unknown, 1 remapped+peering, 302 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:24:26 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 29 01:24:26 np0005539508 python3.9[106592]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:24:26 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 29 01:24:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 29 01:24:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:24:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:26.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:24:27 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 29 01:24:27 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 29 01:24:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:27.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 1 active+clean+scrubbing, 1 unknown, 1 remapped+peering, 302 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:24:28 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Nov 29 01:24:28 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Nov 29 01:24:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:28.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:29 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 01:24:29 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 01:24:29 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 01:24:29 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 01:24:29 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 29 01:24:29 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 29 01:24:29 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 29 01:24:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:24:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:24:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:24:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:24:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:24:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:24:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:24:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:24:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:24:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:24:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:29.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 1 active+remapped, 1 unknown, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:24:30 np0005539508 python3.9[106749]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 01:24:30 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 29 01:24:30 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 29 01:24:30 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 29 01:24:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:30.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:31 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 29 01:24:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:24:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:31.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:24:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 1 active+remapped, 1 peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:24:32 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 29 01:24:32 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 29 01:24:32 np0005539508 python3.9[106903]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:24:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:32.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:24:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:33.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:24:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 1 active+remapped, 1 peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 01:24:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 29 01:24:34 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 29 01:24:34 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 29 01:24:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:34.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:35 np0005539508 python3.9[107057]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 01:24:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:35.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:35 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 29 01:24:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 29 01:24:36 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 29 01:24:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 1 active+remapped, 1 peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 170 B/s wr, 14 op/s; 36 B/s, 1 objects/s recovering
Nov 29 01:24:36 np0005539508 python3.9[107207]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:24:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:24:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:36.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:24:37 np0005539508 python3.9[107366]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:24:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:37.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 11 op/s; 29 B/s, 0 objects/s recovering
Nov 29 01:24:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Nov 29 01:24:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 29 01:24:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Nov 29 01:24:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:38.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:39 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 29 01:24:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Nov 29 01:24:39 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Nov 29 01:24:39 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 29 01:24:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:39.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:24:40 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Nov 29 01:24:40 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 29 01:24:40 np0005539508 python3.9[107572]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:24:40 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Nov 29 01:24:40 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 29 01:24:40 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Nov 29 01:24:40 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Nov 29 01:24:40 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 29 01:24:40 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 29 01:24:40 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 117 pg[9.19( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=84/84 les/c/f=85/85/0 sis=117) [1] r=0 lpr=117 pi=[84,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:24:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:24:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:40.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:24:41 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Nov 29 01:24:41 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 29 01:24:41 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Nov 29 01:24:41 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Nov 29 01:24:41 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 118 pg[9.19( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=84/84 les/c/f=85/85/0 sis=118) [1]/[2] r=-1 lpr=118 pi=[84,118)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:24:41 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 118 pg[9.19( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=84/84 les/c/f=85/85/0 sis=118) [1]/[2] r=-1 lpr=118 pi=[84,118)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:24:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:41.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:24:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 29 01:24:42 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 01:24:42 np0005539508 python3.9[107860]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 01:24:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Nov 29 01:24:42 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 01:24:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:42.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:43 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 01:24:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Nov 29 01:24:43 np0005539508 python3.9[108011]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:24:43 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Nov 29 01:24:43 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=86/86 les/c/f=87/87/0 sis=119) [1] r=0 lpr=119 pi=[86,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:24:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:24:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:43.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:24:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:24:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Nov 29 01:24:44 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 01:24:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 1 active+recovering+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 6/212 objects misplaced (2.830%)
Nov 29 01:24:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 29 01:24:44 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 01:24:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Nov 29 01:24:44 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Nov 29 01:24:44 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 120 pg[9.1a( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=86/86 les/c/f=87/87/0 sis=120) [1]/[0] r=-1 lpr=120 pi=[86,120)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:24:44 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 120 pg[9.1a( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=86/86 les/c/f=87/87/0 sis=120) [1]/[0] r=-1 lpr=120 pi=[86,120)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:24:44 np0005539508 python3.9[108166]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:24:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:24:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:44.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:24:45 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Nov 29 01:24:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:45.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:45 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 01:24:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 1 active+recovering+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 6/212 objects misplaced (2.830%)
Nov 29 01:24:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 29 01:24:46 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 01:24:46 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 01:24:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Nov 29 01:24:46 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Nov 29 01:24:46 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 121 pg[9.1b( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=71/71 les/c/f=72/72/0 sis=121) [1] r=0 lpr=121 pi=[71,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:24:46 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 121 pg[9.19( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=7 ec=58/47 lis/c=118/84 les/c/f=119/85/0 sis=121) [1] r=0 lpr=121 pi=[84,121)/1 luod=0'0 crt=56'1130 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:24:46 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 121 pg[9.19( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=7 ec=58/47 lis/c=118/84 les/c/f=119/85/0 sis=121) [1] r=0 lpr=121 pi=[84,121)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:24:46 np0005539508 python3.9[108323]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:24:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:46.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Nov 29 01:24:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:47.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 1 unknown, 1 active+remapped, 1 peering, 302 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 01:24:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 01:24:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 01:24:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:24:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:48.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:24:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:49.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 1 unknown, 1 active+remapped, 1 peering, 302 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 01:24:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:50.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:51 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 01:24:51 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Nov 29 01:24:51 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Nov 29 01:24:51 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 122 pg[9.1a( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=120/86 les/c/f=121/87/0 sis=122) [1] r=0 lpr=122 pi=[86,122)/1 luod=0'0 crt=56'1130 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:24:51 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=71/71 les/c/f=72/72/0 sis=122) [1]/[2] r=-1 lpr=122 pi=[71,122)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:24:51 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 122 pg[9.1a( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=120/86 les/c/f=121/87/0 sis=122) [1] r=0 lpr=122 pi=[86,122)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:24:51 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=71/71 les/c/f=72/72/0 sis=122) [1]/[2] r=-1 lpr=122 pi=[71,122)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:24:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:51.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:51 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 122 pg[9.19( v 56'1130 (0'0,56'1130] local-lis/les=121/122 n=7 ec=58/47 lis/c=118/84 les/c/f=119/85/0 sis=121) [1] r=0 lpr=121 pi=[84,121)/1 crt=56'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:24:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 1 unknown, 1 active+remapped, 1 peering, 302 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 14 B/s, 0 objects/s recovering
Nov 29 01:24:52 np0005539508 python3.9[108479]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:24:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:52.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Nov 29 01:24:53 np0005539508 python3.9[108634]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 29 01:24:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:24:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:53.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:24:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 01:24:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:24:54
Nov 29 01:24:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:24:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Some PGs (0.003279) are unknown; try again later
Nov 29 01:24:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Nov 29 01:24:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:24:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:24:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:24:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:24:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:24:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:24:54 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 01:24:54 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Nov 29 01:24:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:54.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:55 np0005539508 systemd[1]: session-36.scope: Deactivated successfully.
Nov 29 01:24:55 np0005539508 systemd[1]: session-36.scope: Consumed 19.544s CPU time.
Nov 29 01:24:55 np0005539508 systemd-logind[797]: Session 36 logged out. Waiting for processes to exit.
Nov 29 01:24:55 np0005539508 systemd-logind[797]: Removed session 36.
Nov 29 01:24:55 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 123 pg[9.1a( v 56'1130 (0'0,56'1130] local-lis/les=122/123 n=5 ec=58/47 lis/c=120/86 les/c/f=121/87/0 sis=122) [1] r=0 lpr=122 pi=[86,122)/1 crt=56'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:24:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:24:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:55.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:24:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:24:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:24:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:24:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:56.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:24:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Nov 29 01:24:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:24:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:57.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:24:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:24:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:58.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:24:59 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 29 01:24:59 np0005539508 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 29 01:24:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Nov 29 01:24:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:24:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:24:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:59.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 01:25:00 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 124 pg[9.1b( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=2 ec=58/47 lis/c=122/71 les/c/f=123/72/0 sis=124) [1] r=0 lpr=124 pi=[71,124)/1 luod=0'0 crt=56'1130 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:25:00 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 124 pg[9.1b( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=2 ec=58/47 lis/c=122/71 les/c/f=123/72/0 sis=124) [1] r=0 lpr=124 pi=[71,124)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:25:00 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Nov 29 01:25:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 29 01:25:00 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 01:25:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Nov 29 01:25:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:25:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:00.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:25:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:01.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:01 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 01:25:01 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Nov 29 01:25:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 01:25:02 np0005539508 systemd-logind[797]: New session 37 of user zuul.
Nov 29 01:25:02 np0005539508 systemd[1]: Started Session 37 of User zuul.
Nov 29 01:25:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:02.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:03 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Nov 29 01:25:03 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 29 01:25:03 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 01:25:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:25:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:03.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:25:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:04 np0005539508 python3.9[108867]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:25:04 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 125 pg[9.1b( v 56'1130 (0'0,56'1130] local-lis/les=124/125 n=2 ec=58/47 lis/c=122/71 les/c/f=123/72/0 sis=124) [1] r=0 lpr=124 pi=[71,124)/1 crt=56'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:25:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:04.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Nov 29 01:25:05 np0005539508 python3.9[109022]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 01:25:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:25:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:05.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:25:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 01:25:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Nov 29 01:25:06 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 01:25:06 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 01:25:06 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 01:25:06 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Nov 29 01:25:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:25:06 np0005539508 python3.9[109215]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:25:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:06.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:07 np0005539508 systemd[1]: session-37.scope: Deactivated successfully.
Nov 29 01:25:07 np0005539508 systemd[1]: session-37.scope: Consumed 2.646s CPU time.
Nov 29 01:25:07 np0005539508 systemd-logind[797]: Session 37 logged out. Waiting for processes to exit.
Nov 29 01:25:07 np0005539508 systemd-logind[797]: Removed session 37.
Nov 29 01:25:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:25:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:07.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:25:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 29 01:25:08 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 01:25:08 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 01:25:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Nov 29 01:25:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:08.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:09 np0005539508 podman[109415]: 2025-11-29 06:25:09.078631787 +0000 UTC m=+0.098546893 container exec c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 01:25:09 np0005539508 podman[109415]: 2025-11-29 06:25:09.210464144 +0000 UTC m=+0.230379250 container exec_died c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 01:25:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 01:25:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:25:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:09.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:25:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 29 01:25:10 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 01:25:10 np0005539508 podman[109570]: 2025-11-29 06:25:10.523614247 +0000 UTC m=+0.651432907 container exec f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 01:25:10 np0005539508 podman[109592]: 2025-11-29 06:25:10.699090602 +0000 UTC m=+0.159466541 container exec_died f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 01:25:10 np0005539508 podman[109570]: 2025-11-29 06:25:10.722627272 +0000 UTC m=+0.850445902 container exec_died f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 01:25:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:25:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:10.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:25:11 np0005539508 podman[109638]: 2025-11-29 06:25:11.065503543 +0000 UTC m=+0.061375572 container exec c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, description=keepalived for Ceph, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, release=1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git)
Nov 29 01:25:11 np0005539508 podman[109638]: 2025-11-29 06:25:11.475486179 +0000 UTC m=+0.471358168 container exec_died c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, version=2.2.4, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.openshift.expose-services=, name=keepalived, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.buildah.version=1.28.2)
Nov 29 01:25:11 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:25:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:25:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:11.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:25:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 29 01:25:12 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 01:25:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:25:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:25:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:25:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:25:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:25:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:25:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:25:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:25:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:25:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:25:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:25:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:25:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:25:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:25:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:25:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:25:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:25:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:25:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:25:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:25:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:25:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:25:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:25:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:12.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:13.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 29 01:25:14 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 01:25:14 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 01:25:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Nov 29 01:25:14 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 01:25:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:14.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:15 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:25:15 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Nov 29 01:25:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:25:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:25:15 np0005539508 systemd-logind[797]: New session 38 of user zuul.
Nov 29 01:25:15 np0005539508 systemd[1]: Started Session 38 of User zuul.
Nov 29 01:25:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Nov 29 01:25:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:15.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:15 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:25:16 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:25:16 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 01:25:16 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 01:25:16 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 01:25:16 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Nov 29 01:25:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 01:25:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 01:25:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 01:25:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 01:25:16 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:25:16 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:25:16 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Nov 29 01:25:16 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:25:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:16 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 29 01:25:16 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 01:25:16 np0005539508 python3.9[109825]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:25:16 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:25:16 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Nov 29 01:25:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:16.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:17 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:25:17 np0005539508 python3.9[109980]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:25:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:17.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 29 01:25:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 01:25:18 np0005539508 python3.9[110138]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 01:25:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 01:25:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Nov 29 01:25:18 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:25:18 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 01:25:18 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 01:25:18 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 01:25:18 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:25:18 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 01:25:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:19.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:19 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:25:19 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Nov 29 01:25:19 np0005539508 python3.9[110323]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:25:19 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:25:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Nov 29 01:25:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:19.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:20 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:25:20 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:25:20 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:25:20 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:25:20 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:25:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:21.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:21 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=78/78 les/c/f=79/79/0 sis=129) [1] r=0 lpr=129 pi=[78,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:25:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:21.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:22 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:25:22 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 01:25:22 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 01:25:22 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:25:22 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:25:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 01:25:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Nov 29 01:25:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:25:22 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Nov 29 01:25:22 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev fdd25b2b-c4ab-4a08-b45f-a07c6dcc6a00 does not exist
Nov 29 01:25:22 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev d853bfb0-e303-41cb-90dd-f6e85a9398f0 does not exist
Nov 29 01:25:22 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 46fcd6cd-2bac-4c20-92c6-eefc04520e9e does not exist
Nov 29 01:25:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:25:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:25:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:25:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:25:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:25:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:25:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:23.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:23 np0005539508 podman[110670]: 2025-11-29 06:25:23.176959406 +0000 UTC m=+0.037794580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:25:23 np0005539508 python3.9[110713]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 01:25:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Nov 29 01:25:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:23.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:23 np0005539508 podman[110670]: 2025-11-29 06:25:23.990855283 +0000 UTC m=+0.851690427 container create e7b4af43e3f642c71ac2958f57d09c7c1ddc6cc86d498fa01404ce27273bd56c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_gauss, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 01:25:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:25:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 01:25:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:25:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:25:24 np0005539508 systemd[1]: Started libpod-conmon-e7b4af43e3f642c71ac2958f57d09c7c1ddc6cc86d498fa01404ce27273bd56c.scope.
Nov 29 01:25:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Nov 29 01:25:24 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Nov 29 01:25:24 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:25:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=78/78 les/c/f=79/79/0 sis=131) [1]/[0] r=-1 lpr=131 pi=[78,131)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:25:24 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=78/78 les/c/f=79/79/0 sis=131) [1]/[0] r=-1 lpr=131 pi=[78,131)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:25:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:24 np0005539508 podman[110670]: 2025-11-29 06:25:24.143018014 +0000 UTC m=+1.003853158 container init e7b4af43e3f642c71ac2958f57d09c7c1ddc6cc86d498fa01404ce27273bd56c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 01:25:24 np0005539508 podman[110670]: 2025-11-29 06:25:24.15060238 +0000 UTC m=+1.011437514 container start e7b4af43e3f642c71ac2958f57d09c7c1ddc6cc86d498fa01404ce27273bd56c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_gauss, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 01:25:24 np0005539508 podman[110670]: 2025-11-29 06:25:24.154434625 +0000 UTC m=+1.015269759 container attach e7b4af43e3f642c71ac2958f57d09c7c1ddc6cc86d498fa01404ce27273bd56c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_gauss, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 01:25:24 np0005539508 heuristic_gauss[110783]: 167 167
Nov 29 01:25:24 np0005539508 systemd[1]: libpod-e7b4af43e3f642c71ac2958f57d09c7c1ddc6cc86d498fa01404ce27273bd56c.scope: Deactivated successfully.
Nov 29 01:25:24 np0005539508 podman[110670]: 2025-11-29 06:25:24.156733957 +0000 UTC m=+1.017569091 container died e7b4af43e3f642c71ac2958f57d09c7c1ddc6cc86d498fa01404ce27273bd56c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_gauss, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 01:25:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:25:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:25:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:25:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:25:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:25:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:25:24 np0005539508 systemd[1]: var-lib-containers-storage-overlay-4665e3c56c94b852808c214b15f6df8da5ef077be2b10e9bbd0f13f290823647-merged.mount: Deactivated successfully.
Nov 29 01:25:24 np0005539508 podman[110670]: 2025-11-29 06:25:24.320456831 +0000 UTC m=+1.181291965 container remove e7b4af43e3f642c71ac2958f57d09c7c1ddc6cc86d498fa01404ce27273bd56c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 01:25:24 np0005539508 systemd[1]: libpod-conmon-e7b4af43e3f642c71ac2958f57d09c7c1ddc6cc86d498fa01404ce27273bd56c.scope: Deactivated successfully.
Nov 29 01:25:24 np0005539508 podman[110829]: 2025-11-29 06:25:24.451556859 +0000 UTC m=+0.019131272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:25:24 np0005539508 podman[110829]: 2025-11-29 06:25:24.552223068 +0000 UTC m=+0.119797421 container create ea440f92dd8dc50e31e825a3ee183a6da5d118456742852e38dd723173a6d29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mcnulty, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 01:25:24 np0005539508 systemd[1]: Started libpod-conmon-ea440f92dd8dc50e31e825a3ee183a6da5d118456742852e38dd723173a6d29b.scope.
Nov 29 01:25:24 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:25:24 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cb63fc6033e21838bb7304b258349c870aba4fe35c29a8f0a89c36cb1c930d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:25:24 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cb63fc6033e21838bb7304b258349c870aba4fe35c29a8f0a89c36cb1c930d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:25:24 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cb63fc6033e21838bb7304b258349c870aba4fe35c29a8f0a89c36cb1c930d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:25:24 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cb63fc6033e21838bb7304b258349c870aba4fe35c29a8f0a89c36cb1c930d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:25:24 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cb63fc6033e21838bb7304b258349c870aba4fe35c29a8f0a89c36cb1c930d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:25:24 np0005539508 podman[110829]: 2025-11-29 06:25:24.638226338 +0000 UTC m=+0.205800771 container init ea440f92dd8dc50e31e825a3ee183a6da5d118456742852e38dd723173a6d29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mcnulty, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 01:25:24 np0005539508 podman[110829]: 2025-11-29 06:25:24.648064486 +0000 UTC m=+0.215638869 container start ea440f92dd8dc50e31e825a3ee183a6da5d118456742852e38dd723173a6d29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mcnulty, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 01:25:24 np0005539508 podman[110829]: 2025-11-29 06:25:24.694557001 +0000 UTC m=+0.262131364 container attach ea440f92dd8dc50e31e825a3ee183a6da5d118456742852e38dd723173a6d29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 01:25:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:25.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Nov 29 01:25:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Nov 29 01:25:25 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Nov 29 01:25:25 np0005539508 python3.9[110956]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:25:25 np0005539508 priceless_mcnulty[110873]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:25:25 np0005539508 priceless_mcnulty[110873]: --> relative data size: 1.0
Nov 29 01:25:25 np0005539508 priceless_mcnulty[110873]: --> All data devices are unavailable
Nov 29 01:25:25 np0005539508 systemd[1]: libpod-ea440f92dd8dc50e31e825a3ee183a6da5d118456742852e38dd723173a6d29b.scope: Deactivated successfully.
Nov 29 01:25:25 np0005539508 podman[110829]: 2025-11-29 06:25:25.485539075 +0000 UTC m=+1.053113438 container died ea440f92dd8dc50e31e825a3ee183a6da5d118456742852e38dd723173a6d29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mcnulty, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 01:25:25 np0005539508 systemd[1]: var-lib-containers-storage-overlay-65cb63fc6033e21838bb7304b258349c870aba4fe35c29a8f0a89c36cb1c930d-merged.mount: Deactivated successfully.
Nov 29 01:25:25 np0005539508 podman[110829]: 2025-11-29 06:25:25.67273157 +0000 UTC m=+1.240305933 container remove ea440f92dd8dc50e31e825a3ee183a6da5d118456742852e38dd723173a6d29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mcnulty, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:25:25 np0005539508 systemd[1]: libpod-conmon-ea440f92dd8dc50e31e825a3ee183a6da5d118456742852e38dd723173a6d29b.scope: Deactivated successfully.
Nov 29 01:25:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:25.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Nov 29 01:25:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:26 np0005539508 python3.9[111234]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:25:26 np0005539508 podman[111287]: 2025-11-29 06:25:26.281215297 +0000 UTC m=+0.022609516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:25:26 np0005539508 podman[111287]: 2025-11-29 06:25:26.512626725 +0000 UTC m=+0.254020934 container create b02f9a29b3a35d68f02aa0274f542b0ec0d067281632ad664d4b6972de9f6698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 01:25:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Nov 29 01:25:26 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Nov 29 01:25:26 np0005539508 systemd[1]: Started libpod-conmon-b02f9a29b3a35d68f02aa0274f542b0ec0d067281632ad664d4b6972de9f6698.scope.
Nov 29 01:25:26 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 133 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=131/78 les/c/f=132/79/0 sis=133) [1] r=0 lpr=133 pi=[78,133)/1 luod=0'0 crt=56'1130 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:25:26 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 133 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=131/78 les/c/f=132/79/0 sis=133) [1] r=0 lpr=133 pi=[78,133)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:25:26 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:25:26 np0005539508 podman[111287]: 2025-11-29 06:25:26.692919721 +0000 UTC m=+0.434313950 container init b02f9a29b3a35d68f02aa0274f542b0ec0d067281632ad664d4b6972de9f6698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wilson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:25:26 np0005539508 podman[111287]: 2025-11-29 06:25:26.701958936 +0000 UTC m=+0.443353145 container start b02f9a29b3a35d68f02aa0274f542b0ec0d067281632ad664d4b6972de9f6698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wilson, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 01:25:26 np0005539508 podman[111287]: 2025-11-29 06:25:26.706762397 +0000 UTC m=+0.448156606 container attach b02f9a29b3a35d68f02aa0274f542b0ec0d067281632ad664d4b6972de9f6698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wilson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 01:25:26 np0005539508 eloquent_wilson[111379]: 167 167
Nov 29 01:25:26 np0005539508 systemd[1]: libpod-b02f9a29b3a35d68f02aa0274f542b0ec0d067281632ad664d4b6972de9f6698.scope: Deactivated successfully.
Nov 29 01:25:26 np0005539508 podman[111287]: 2025-11-29 06:25:26.71126003 +0000 UTC m=+0.452654239 container died b02f9a29b3a35d68f02aa0274f542b0ec0d067281632ad664d4b6972de9f6698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wilson, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 01:25:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:25:26 np0005539508 systemd[1]: var-lib-containers-storage-overlay-69a207eb2644393141811213d70d756919bb8b9352533d88782a8c8f64c0a281-merged.mount: Deactivated successfully.
Nov 29 01:25:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:27.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:27 np0005539508 python3.9[111470]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:25:27 np0005539508 podman[111287]: 2025-11-29 06:25:27.226587203 +0000 UTC m=+0.967981412 container remove b02f9a29b3a35d68f02aa0274f542b0ec0d067281632ad664d4b6972de9f6698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wilson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:25:27 np0005539508 systemd[1]: libpod-conmon-b02f9a29b3a35d68f02aa0274f542b0ec0d067281632ad664d4b6972de9f6698.scope: Deactivated successfully.
Nov 29 01:25:27 np0005539508 podman[111528]: 2025-11-29 06:25:27.420737246 +0000 UTC m=+0.059533171 container create 473a02aeec21dc113b27476c4bb98a7085ec407f6f0ef0f74a20e609c2ba3922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 29 01:25:27 np0005539508 systemd[1]: Started libpod-conmon-473a02aeec21dc113b27476c4bb98a7085ec407f6f0ef0f74a20e609c2ba3922.scope.
Nov 29 01:25:27 np0005539508 podman[111528]: 2025-11-29 06:25:27.385123447 +0000 UTC m=+0.023919382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:25:27 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:25:27 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395fc60bcc2310860ae083984018937c9ec9be027b31c9e6c96f234581df46a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:25:27 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395fc60bcc2310860ae083984018937c9ec9be027b31c9e6c96f234581df46a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:25:27 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395fc60bcc2310860ae083984018937c9ec9be027b31c9e6c96f234581df46a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:25:27 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395fc60bcc2310860ae083984018937c9ec9be027b31c9e6c96f234581df46a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:25:27 np0005539508 podman[111528]: 2025-11-29 06:25:27.499294814 +0000 UTC m=+0.138090759 container init 473a02aeec21dc113b27476c4bb98a7085ec407f6f0ef0f74a20e609c2ba3922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 29 01:25:27 np0005539508 podman[111528]: 2025-11-29 06:25:27.511437564 +0000 UTC m=+0.150233489 container start 473a02aeec21dc113b27476c4bb98a7085ec407f6f0ef0f74a20e609c2ba3922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:25:27 np0005539508 podman[111528]: 2025-11-29 06:25:27.515141475 +0000 UTC m=+0.153937400 container attach 473a02aeec21dc113b27476c4bb98a7085ec407f6f0ef0f74a20e609c2ba3922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackwell, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 01:25:27 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Nov 29 01:25:27 np0005539508 python3.9[111569]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:25:27 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Nov 29 01:25:27 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Nov 29 01:25:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:27.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]: {
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:    "1": [
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:        {
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:            "devices": [
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:                "/dev/loop3"
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:            ],
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:            "lv_name": "ceph_lv0",
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:            "lv_size": "7511998464",
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:            "name": "ceph_lv0",
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:            "tags": {
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:                "ceph.cluster_name": "ceph",
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:                "ceph.crush_device_class": "",
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:                "ceph.encrypted": "0",
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:                "ceph.osd_id": "1",
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:                "ceph.type": "block",
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:                "ceph.vdo": "0"
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:            },
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:            "type": "block",
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:            "vg_name": "ceph_vg0"
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:        }
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]:    ]
Nov 29 01:25:28 np0005539508 hopeful_blackwell[111572]: }
Nov 29 01:25:28 np0005539508 systemd[1]: libpod-473a02aeec21dc113b27476c4bb98a7085ec407f6f0ef0f74a20e609c2ba3922.scope: Deactivated successfully.
Nov 29 01:25:28 np0005539508 podman[111528]: 2025-11-29 06:25:28.3444086 +0000 UTC m=+0.983204525 container died 473a02aeec21dc113b27476c4bb98a7085ec407f6f0ef0f74a20e609c2ba3922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:25:28 np0005539508 python3.9[111729]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:25:28 np0005539508 python3.9[111821]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:25:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:29.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:25:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:25:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:25:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:25:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:25:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:25:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:25:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:25:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:25:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:25:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:25:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:29.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:25:30 np0005539508 python3.9[111974]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:25:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 8.3 KiB/s rd, 170 B/s wr, 15 op/s; 109 B/s, 2 objects/s recovering
Nov 29 01:25:30 np0005539508 python3.9[112126]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:25:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:25:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:31.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:25:31 np0005539508 python3.9[112279]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:25:31 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 01:25:31 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 134 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=133/134 n=5 ec=58/47 lis/c=131/78 les/c/f=132/79/0 sis=133) [1] r=0 lpr=133 pi=[78,133)/1 crt=56'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:25:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:31.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:32 np0005539508 systemd[1]: var-lib-containers-storage-overlay-395fc60bcc2310860ae083984018937c9ec9be027b31c9e6c96f234581df46a8-merged.mount: Deactivated successfully.
Nov 29 01:25:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 62 B/s, 0 objects/s recovering
Nov 29 01:25:32 np0005539508 python3.9[112432]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:25:32 np0005539508 podman[111528]: 2025-11-29 06:25:32.446607777 +0000 UTC m=+5.085403702 container remove 473a02aeec21dc113b27476c4bb98a7085ec407f6f0ef0f74a20e609c2ba3922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:25:32 np0005539508 systemd[1]: libpod-conmon-473a02aeec21dc113b27476c4bb98a7085ec407f6f0ef0f74a20e609c2ba3922.scope: Deactivated successfully.
Nov 29 01:25:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:33.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:33 np0005539508 podman[112670]: 2025-11-29 06:25:33.024220195 +0000 UTC m=+0.022724339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:25:33 np0005539508 podman[112670]: 2025-11-29 06:25:33.23682429 +0000 UTC m=+0.235328414 container create 1f6c322f8f62420c7a3d85fa9ee245847bcb76e790b1d63628b7cc617a70798e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:25:33 np0005539508 python3.9[112736]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:25:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:33.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 01:25:34 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:25:34 np0005539508 systemd[1]: Started libpod-conmon-1f6c322f8f62420c7a3d85fa9ee245847bcb76e790b1d63628b7cc617a70798e.scope.
Nov 29 01:25:34 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:25:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Nov 29 01:25:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:35.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:35 np0005539508 podman[112670]: 2025-11-29 06:25:35.321024534 +0000 UTC m=+2.319528728 container init 1f6c322f8f62420c7a3d85fa9ee245847bcb76e790b1d63628b7cc617a70798e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_elbakyan, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:25:35 np0005539508 podman[112670]: 2025-11-29 06:25:35.332808795 +0000 UTC m=+2.331312959 container start 1f6c322f8f62420c7a3d85fa9ee245847bcb76e790b1d63628b7cc617a70798e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_elbakyan, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 01:25:35 np0005539508 bold_elbakyan[112740]: 167 167
Nov 29 01:25:35 np0005539508 systemd[1]: libpod-1f6c322f8f62420c7a3d85fa9ee245847bcb76e790b1d63628b7cc617a70798e.scope: Deactivated successfully.
Nov 29 01:25:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:35.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:36 np0005539508 podman[112670]: 2025-11-29 06:25:36.229194576 +0000 UTC m=+3.227698710 container attach 1f6c322f8f62420c7a3d85fa9ee245847bcb76e790b1d63628b7cc617a70798e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_elbakyan, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:25:36 np0005539508 podman[112670]: 2025-11-29 06:25:36.230501502 +0000 UTC m=+3.229005666 container died 1f6c322f8f62420c7a3d85fa9ee245847bcb76e790b1d63628b7cc617a70798e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_elbakyan, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:25:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 01:25:36 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:25:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:25:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:37.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:25:37 np0005539508 python3.9[112910]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:25:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:37.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:38 np0005539508 python3.9[113065]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:25:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:38 np0005539508 python3.9[113217]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:25:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:39.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:39 np0005539508 python3.9[113370]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:25:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:25:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:39.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:25:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:40 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 01:25:40 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:25:40 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:25:40 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Nov 29 01:25:40 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 29 01:25:40 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 01:25:40 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:25:40 np0005539508 systemd[1]: var-lib-containers-storage-overlay-fa815d34c3e44f00ca9de99b70b70bc562b0283776134a8925600a9a3ec8ed36-merged.mount: Deactivated successfully.
Nov 29 01:25:40 np0005539508 podman[112670]: 2025-11-29 06:25:40.515994028 +0000 UTC m=+7.514498172 container remove 1f6c322f8f62420c7a3d85fa9ee245847bcb76e790b1d63628b7cc617a70798e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_elbakyan, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:25:40 np0005539508 systemd[1]: libpod-conmon-1f6c322f8f62420c7a3d85fa9ee245847bcb76e790b1d63628b7cc617a70798e.scope: Deactivated successfully.
Nov 29 01:25:40 np0005539508 podman[113582]: 2025-11-29 06:25:40.665751504 +0000 UTC m=+0.028008644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:25:40 np0005539508 python3.9[113576]: ansible-service_facts Invoked
Nov 29 01:25:40 np0005539508 network[113613]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 01:25:40 np0005539508 network[113614]: 'network-scripts' will be removed from distribution in near future.
Nov 29 01:25:40 np0005539508 network[113615]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 01:25:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:25:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:41.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:25:41 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:25:41 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:25:41 np0005539508 podman[113582]: 2025-11-29 06:25:41.256639743 +0000 UTC m=+0.618896863 container create d3aec66fa4f6311cfcce4a377f06332f6acf81bd52f7e454782157b30e2c1e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 01:25:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:41.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:42 np0005539508 systemd[1]: Started libpod-conmon-d3aec66fa4f6311cfcce4a377f06332f6acf81bd52f7e454782157b30e2c1e54.scope.
Nov 29 01:25:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:42 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:25:42 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39002653caa0ab45dc80471b6e03286d68cbb9fc1563cf2991782195e6a41e20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:25:42 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39002653caa0ab45dc80471b6e03286d68cbb9fc1563cf2991782195e6a41e20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:25:42 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39002653caa0ab45dc80471b6e03286d68cbb9fc1563cf2991782195e6a41e20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:25:42 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39002653caa0ab45dc80471b6e03286d68cbb9fc1563cf2991782195e6a41e20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:25:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Nov 29 01:25:43 np0005539508 podman[113582]: 2025-11-29 06:25:43.016765798 +0000 UTC m=+2.379022938 container init d3aec66fa4f6311cfcce4a377f06332f6acf81bd52f7e454782157b30e2c1e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:25:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:43.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:43 np0005539508 podman[113582]: 2025-11-29 06:25:43.035123748 +0000 UTC m=+2.397380868 container start d3aec66fa4f6311cfcce4a377f06332f6acf81bd52f7e454782157b30e2c1e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_snyder, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:25:43 np0005539508 podman[113582]: 2025-11-29 06:25:43.398762363 +0000 UTC m=+2.761019493 container attach d3aec66fa4f6311cfcce4a377f06332f6acf81bd52f7e454782157b30e2c1e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_snyder, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 01:25:43 np0005539508 epic_snyder[113644]: {
Nov 29 01:25:43 np0005539508 epic_snyder[113644]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:25:43 np0005539508 epic_snyder[113644]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:25:43 np0005539508 epic_snyder[113644]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:25:43 np0005539508 epic_snyder[113644]:        "osd_id": 1,
Nov 29 01:25:43 np0005539508 epic_snyder[113644]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:25:43 np0005539508 epic_snyder[113644]:        "type": "bluestore"
Nov 29 01:25:43 np0005539508 epic_snyder[113644]:    }
Nov 29 01:25:43 np0005539508 epic_snyder[113644]: }
Nov 29 01:25:43 np0005539508 systemd[1]: libpod-d3aec66fa4f6311cfcce4a377f06332f6acf81bd52f7e454782157b30e2c1e54.scope: Deactivated successfully.
Nov 29 01:25:43 np0005539508 podman[113582]: 2025-11-29 06:25:43.889124177 +0000 UTC m=+3.251381307 container died d3aec66fa4f6311cfcce4a377f06332f6acf81bd52f7e454782157b30e2c1e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 01:25:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:43.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:25:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:45.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:25:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:25:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:45.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:25:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:46 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:25:46 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:25:46 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:25:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Nov 29 01:25:46 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:25:46 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:25:46 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:25:46 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 01:25:46 np0005539508 systemd[1]: var-lib-containers-storage-overlay-39002653caa0ab45dc80471b6e03286d68cbb9fc1563cf2991782195e6a41e20-merged.mount: Deactivated successfully.
Nov 29 01:25:46 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Nov 29 01:25:46 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:25:46 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:25:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:47.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:47.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:48 np0005539508 python3.9[114109]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:25:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:25:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:49.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:25:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:25:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:49.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:25:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:25:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Nov 29 01:25:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:25:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:51.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:25:51 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Nov 29 01:25:51 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:25:51 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:25:51 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 01:25:51 np0005539508 podman[113582]: 2025-11-29 06:25:51.607958961 +0000 UTC m=+10.970216111 container remove d3aec66fa4f6311cfcce4a377f06332f6acf81bd52f7e454782157b30e2c1e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_snyder, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:25:51 np0005539508 systemd[1]: libpod-conmon-d3aec66fa4f6311cfcce4a377f06332f6acf81bd52f7e454782157b30e2c1e54.scope: Deactivated successfully.
Nov 29 01:25:51 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Nov 29 01:25:51 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:25:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:51.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:52 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:25:52 np0005539508 python3.9[114265]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 29 01:25:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:25:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:53.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:53 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:25:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:53.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:25:54 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 6ac88d45-3d8a-4824-ba5d-33b78eb582e9 does not exist
Nov 29 01:25:54 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev cc71a288-4ddc-46fd-a55c-e9f907082bdb does not exist
Nov 29 01:25:54 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev d8c8119c-3329-41d0-af59-22fcd62acf40 does not exist
Nov 29 01:25:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:25:54
Nov 29 01:25:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:25:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Some PGs (0.003279) are unknown; try again later
Nov 29 01:25:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:25:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:25:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:25:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:25:54 np0005539508 python3.9[114418]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:25:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:25:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:25:54 np0005539508 python3.9[114546]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:25:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Nov 29 01:25:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:25:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:55.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:25:55 np0005539508 python3.9[114699]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:25:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:25:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:55.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:25:56 np0005539508 python3.9[114777]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:25:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:25:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:25:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:57.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:25:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:25:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:58.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:25:58 np0005539508 python3.9[114930]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:25:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 01:25:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:25:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:25:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:59.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:25:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Nov 29 01:25:59 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:26:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:00.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:00 np0005539508 python3.9[115083]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 01:26:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 12 B/s, 0 objects/s recovering
Nov 29 01:26:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:26:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:01.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:26:01 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Nov 29 01:26:01 np0005539508 python3.9[115218]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:26:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:02.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 01:26:02 np0005539508 systemd[1]: session-38.scope: Deactivated successfully.
Nov 29 01:26:02 np0005539508 systemd[1]: session-38.scope: Consumed 26.215s CPU time.
Nov 29 01:26:02 np0005539508 systemd-logind[797]: Session 38 logged out. Waiting for processes to exit.
Nov 29 01:26:02 np0005539508 systemd-logind[797]: Removed session 38.
Nov 29 01:26:02 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 luod=0'0 crt=56'1130 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:26:02 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:26:02 np0005539508 systemd[1]: session-18.scope: Deactivated successfully.
Nov 29 01:26:02 np0005539508 systemd[1]: session-18.scope: Consumed 1min 24.311s CPU time.
Nov 29 01:26:02 np0005539508 systemd-logind[797]: Session 18 logged out. Waiting for processes to exit.
Nov 29 01:26:02 np0005539508 systemd-logind[797]: Removed session 18.
Nov 29 01:26:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:26:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:03.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:26:03 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:26:03 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Nov 29 01:26:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:26:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:04.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:26:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Nov 29 01:26:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:26:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:05.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:26:05 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 29 01:26:05 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 139 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=138/139 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:26:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:06.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:07 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 01:26:07 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 1703 writes, 8021 keys, 1703 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 1703 writes, 1703 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1703 writes, 8021 keys, 1703 commit groups, 1.0 writes per commit group, ingest: 11.38 MB, 0.02 MB/s#012Interval WAL: 1703 writes, 1703 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   55.46 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0   55.46 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e1a58311f0#2 capacity: 304.00 MB usage: 57.08 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(10,56.30 KB,0.0180847%) FilterBlock(2,0.42 KB,0.000135522%) IndexBlock(2,0.36 KB,0.000115445%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 01:26:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:26:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:07.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:26:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:08.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:08 np0005539508 systemd-logind[797]: New session 39 of user zuul.
Nov 29 01:26:08 np0005539508 systemd[1]: Started Session 39 of User zuul.
Nov 29 01:26:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:26:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:09.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:09 np0005539508 python3.9[115406]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:26:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:10.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:10 np0005539508 python3.9[115558]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:26:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:26:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:11.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:26:11 np0005539508 python3.9[115637]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:26:11 np0005539508 systemd[1]: session-39.scope: Deactivated successfully.
Nov 29 01:26:11 np0005539508 systemd[1]: session-39.scope: Consumed 1.804s CPU time.
Nov 29 01:26:11 np0005539508 systemd-logind[797]: Session 39 logged out. Waiting for processes to exit.
Nov 29 01:26:11 np0005539508 systemd-logind[797]: Removed session 39.
Nov 29 01:26:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:26:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:12.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:26:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:26:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:26:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:26:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:26:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:26:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:26:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:26:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:26:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:26:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:26:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:26:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:26:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:26:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:26:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:26:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:26:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:26:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:26:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:26:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:26:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:26:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:26:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:26:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:26:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:13.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:26:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:26:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:26:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:14.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:26:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 01:26:14 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 29 01:26:14 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:26:14.657424) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 01:26:14 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 29 01:26:14 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397574657535, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 8329, "num_deletes": 251, "total_data_size": 12098626, "memory_usage": 12307704, "flush_reason": "Manual Compaction"}
Nov 29 01:26:14 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 29 01:26:15 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397575008751, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 10230000, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 145, "largest_seqno": 8465, "table_properties": {"data_size": 10196642, "index_size": 22363, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9925, "raw_key_size": 94444, "raw_average_key_size": 23, "raw_value_size": 10119073, "raw_average_value_size": 2558, "num_data_blocks": 978, "num_entries": 3955, "num_filter_entries": 3955, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396967, "oldest_key_time": 1764396967, "file_creation_time": 1764397574, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:26:15 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 351616 microseconds, and 21586 cpu microseconds.
Nov 29 01:26:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:15.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:15 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:26:15.009040) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 10230000 bytes OK
Nov 29 01:26:15 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:26:15.009127) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 29 01:26:15 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:26:15.324327) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 29 01:26:15 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:26:15.324385) EVENT_LOG_v1 {"time_micros": 1764397575324375, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 29 01:26:15 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:26:15.324412) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 29 01:26:15 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 12060700, prev total WAL file size 12061854, number of live WAL files 2.
Nov 29 01:26:15 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:26:15 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:26:15.327088) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 29 01:26:15 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 29 01:26:15 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(9990KB) 13(53KB) 8(1944B)]
Nov 29 01:26:15 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397575327208, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 10286793, "oldest_snapshot_seqno": -1}
Nov 29 01:26:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:16.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:16 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3767 keys, 10242256 bytes, temperature: kUnknown
Nov 29 01:26:16 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397576121445, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 10242256, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10209349, "index_size": 22365, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 92349, "raw_average_key_size": 24, "raw_value_size": 10133496, "raw_average_value_size": 2690, "num_data_blocks": 982, "num_entries": 3767, "num_filter_entries": 3767, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 1764397575, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:26:16 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 01:26:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Nov 29 01:26:16 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:26:16.121731) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 10242256 bytes
Nov 29 01:26:16 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:26:16.207478) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 13.0 rd, 12.9 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(9.8, 0.0 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 4059, records dropped: 292 output_compression: NoCompression
Nov 29 01:26:16 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:26:16.207551) EVENT_LOG_v1 {"time_micros": 1764397576207527, "job": 4, "event": "compaction_finished", "compaction_time_micros": 794327, "compaction_time_cpu_micros": 24489, "output_level": 6, "num_output_files": 1, "total_output_size": 10242256, "num_input_records": 4059, "num_output_records": 3767, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 01:26:16 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:26:16 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397576212695, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 29 01:26:16 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:26:16 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397576213022, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 29 01:26:16 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:26:16 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397576213112, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 29 01:26:16 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:26:15.326947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:26:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:26:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:17.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:26:17 np0005539508 systemd-logind[797]: New session 40 of user zuul.
Nov 29 01:26:17 np0005539508 systemd[1]: Started Session 40 of User zuul.
Nov 29 01:26:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:26:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:18.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:26:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Nov 29 01:26:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:19.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:19 np0005539508 python3.9[115820]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:26:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:26:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:26:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:20.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:26:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:20 np0005539508 python3.9[115977]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:26:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:26:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:21.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:26:21 np0005539508 python3.9[116203]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:26:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:22.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:22 np0005539508 python3.9[116281]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.06e0gsw3 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:26:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:26:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:23.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:26:23 np0005539508 python3.9[116434]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:26:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:24.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:24 np0005539508 python3.9[116512]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.dg5z02zt recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:26:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:26:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:26:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:26:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:26:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:26:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:26:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:26:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:25.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:25 np0005539508 python3.9[116665]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:26:26 np0005539508 python3.9[116817]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:26:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:26:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:26.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:26:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:26 np0005539508 python3.9[116895]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:26:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:27.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:27 np0005539508 python3.9[117048]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:26:27 np0005539508 python3.9[117126]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:26:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:28.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:28 np0005539508 python3.9[117282]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:26:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:29.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:29 np0005539508 python3.9[117435]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:26:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:26:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:26:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:26:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:26:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:26:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:26:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:26:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:26:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:26:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:26:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:26:29 np0005539508 python3.9[117513]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:26:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:30.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:30 np0005539508 python3.9[117667]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:26:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:26:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:31.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:26:31 np0005539508 python3.9[117746]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:26:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:32.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:32 np0005539508 python3.9[117898]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:26:32 np0005539508 systemd[1]: Reloading.
Nov 29 01:26:32 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:26:32 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:26:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:26:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:33.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:26:33 np0005539508 python3.9[118088]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:26:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:34.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:34 np0005539508 python3.9[118166]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:26:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:26:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:35.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:35 np0005539508 python3.9[118319]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:26:35 np0005539508 python3.9[118397]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:26:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:26:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:36.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:26:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:36 np0005539508 python3.9[118549]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:26:36 np0005539508 systemd[1]: Reloading.
Nov 29 01:26:36 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:26:36 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:26:36 np0005539508 systemd[1]: Starting Create netns directory...
Nov 29 01:26:36 np0005539508 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 01:26:36 np0005539508 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 01:26:36 np0005539508 systemd[1]: Finished Create netns directory.
Nov 29 01:26:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:37.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:37 np0005539508 python3.9[118741]: ansible-ansible.builtin.service_facts Invoked
Nov 29 01:26:37 np0005539508 network[118758]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 01:26:37 np0005539508 network[118759]: 'network-scripts' will be removed from distribution in near future.
Nov 29 01:26:37 np0005539508 network[118760]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 01:26:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:38.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:39.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:26:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:40.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:26:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:41.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:26:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:26:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:42.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:26:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:26:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:43.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:26:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:44.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:44 np0005539508 python3.9[119075]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:26:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:26:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:45.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:45 np0005539508 python3.9[119154]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:26:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:46.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:47.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:48.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:49.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:26:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:26:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:50.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:26:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:50 np0005539508 python3.9[119310]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:26:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:26:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:51.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:26:51 np0005539508 python3.9[119463]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:26:51 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 01:26:51 np0005539508 ceph-mon[74654]: paxos.0).electionLogic(15) init, last seen epoch 15, mid-election, bumping
Nov 29 01:26:52 np0005539508 python3.9[119541]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:26:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:52.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 01:26:52 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 01:26:52 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 01:26:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 01:26:52 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active} 2 up:standby
Nov 29 01:26:52 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 29 01:26:52 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.vxabpq(active, since 9m), standbys: compute-2.ngsyhe, compute-1.gaxpay
Nov 29 01:26:52 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 01:26:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:53.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:53 np0005539508 python3.9[119694]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 01:26:53 np0005539508 systemd[1]: Starting Time & Date Service...
Nov 29 01:26:53 np0005539508 systemd[1]: Started Time & Date Service.
Nov 29 01:26:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:26:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:54.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:26:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:26:54
Nov 29 01:26:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:26:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:26:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['volumes', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'vms', 'default.rgw.log', '.mgr', 'images']
Nov 29 01:26:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:26:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:26:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:26:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:26:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:26:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:26:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:26:54 np0005539508 python3.9[119850]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:26:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:26:54 np0005539508 ceph-mon[74654]: mon.compute-1 calling monitor election
Nov 29 01:26:54 np0005539508 ceph-mon[74654]: mon.compute-0 calling monitor election
Nov 29 01:26:54 np0005539508 ceph-mon[74654]: mon.compute-2 calling monitor election
Nov 29 01:26:54 np0005539508 ceph-mon[74654]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 01:26:54 np0005539508 ceph-mon[74654]: overall HEALTH_OK
Nov 29 01:26:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:55.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:55 np0005539508 python3.9[120128]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:26:55 np0005539508 podman[120175]: 2025-11-29 06:26:55.629240098 +0000 UTC m=+0.384062012 container exec c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 01:26:55 np0005539508 python3.9[120264]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:26:55 np0005539508 podman[120175]: 2025-11-29 06:26:55.832021462 +0000 UTC m=+0.586843406 container exec_died c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:26:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:56.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:56 np0005539508 python3.9[120522]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:26:56 np0005539508 podman[120558]: 2025-11-29 06:26:56.428259643 +0000 UTC m=+0.073441724 container exec f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 01:26:56 np0005539508 podman[120558]: 2025-11-29 06:26:56.435910359 +0000 UTC m=+0.081092430 container exec_died f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 01:26:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:26:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 01:26:56 np0005539508 podman[120682]: 2025-11-29 06:26:56.749271304 +0000 UTC m=+0.126243564 container exec c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, name=keepalived, release=1793, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=keepalived-container, vcs-type=git, version=2.2.4, distribution-scope=public)
Nov 29 01:26:56 np0005539508 python3.9[120714]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.p9tbr6va recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:26:56 np0005539508 podman[120722]: 2025-11-29 06:26:56.88407343 +0000 UTC m=+0.112436015 container exec_died c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, version=2.2.4, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, build-date=2023-02-22T09:23:20, distribution-scope=public, io.openshift.tags=Ceph keepalived, name=keepalived, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 29 01:26:57 np0005539508 podman[120682]: 2025-11-29 06:26:57.043620093 +0000 UTC m=+0.420592283 container exec_died c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, architecture=x86_64, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, distribution-scope=public, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., version=2.2.4, build-date=2023-02-22T09:23:20)
Nov 29 01:26:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:57.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:26:57 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:26:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:26:57 np0005539508 python3.9[120887]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:26:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:26:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:58.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:26:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:26:58 np0005539508 python3.9[120965]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:26:58 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:26:58 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:26:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:26:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:26:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:59.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:26:59 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:26:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:26:59 np0005539508 python3.9[121118]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:26:59 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:26:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:27:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:00.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:00 np0005539508 python3[121271]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 01:27:00 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:27:01 np0005539508 python3.9[121423]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:27:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:01.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:01 np0005539508 python3.9[121552]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:27:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:02.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:27:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:02 np0005539508 python3.9[121706]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:27:03 np0005539508 python3.9[121784]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:27:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:03.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:27:03 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:27:03 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:27:03 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:27:03 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:27:03 np0005539508 python3.9[121937]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:27:03 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:27:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:04.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:04 np0005539508 python3.9[122066]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:05 np0005539508 python3.9[122299]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:27:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:05.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:05 np0005539508 python3.9[122377]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:27:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:27:05 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:27:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:27:05 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:27:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:27:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:06.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:07.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:08.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:08 np0005539508 python3.9[122529]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:27:08 np0005539508 python3.9[122610]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:27:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:09.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:27:09 np0005539508 python3.9[122763]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:27:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:10.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:10 np0005539508 python3.9[122918]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:27:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:27:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:11.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:27:11 np0005539508 python3.9[123071]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:12.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:12 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:27:12 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:12 np0005539508 python3.9[123223]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:12 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 72bf1e0e-faac-4bd8-936b-e080b9ed62a7 does not exist
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev b450179a-7254-47e2-b310-cb17131ba156 does not exist
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 7f448bd4-3e36-4ca4-b4db-39ed18a8ec9a does not exist
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:27:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:27:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:27:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:13.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:27:13 np0005539508 python3.9[123376]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 01:27:13 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:27:13 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:27:13 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:27:13 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:27:13 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:27:13 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:27:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:14.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:14 np0005539508 python3.9[123530]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 01:27:14 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:27:14 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:27:14 np0005539508 podman[123689]: 2025-11-29 06:27:14.60876624 +0000 UTC m=+0.043190313 container create d03167ded80e24be5138ae24b7fd13747892624003cc12510697c501180f9598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_williams, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:27:14 np0005539508 systemd[1]: Started libpod-conmon-d03167ded80e24be5138ae24b7fd13747892624003cc12510697c501180f9598.scope.
Nov 29 01:27:14 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:27:14 np0005539508 podman[123689]: 2025-11-29 06:27:14.588915913 +0000 UTC m=+0.023340016 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:27:14 np0005539508 podman[123689]: 2025-11-29 06:27:14.694942837 +0000 UTC m=+0.129366960 container init d03167ded80e24be5138ae24b7fd13747892624003cc12510697c501180f9598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_williams, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Nov 29 01:27:14 np0005539508 podman[123689]: 2025-11-29 06:27:14.703195748 +0000 UTC m=+0.137619821 container start d03167ded80e24be5138ae24b7fd13747892624003cc12510697c501180f9598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:27:14 np0005539508 podman[123689]: 2025-11-29 06:27:14.707028095 +0000 UTC m=+0.141452248 container attach d03167ded80e24be5138ae24b7fd13747892624003cc12510697c501180f9598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_williams, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 01:27:14 np0005539508 happy_williams[123705]: 167 167
Nov 29 01:27:14 np0005539508 systemd[1]: libpod-d03167ded80e24be5138ae24b7fd13747892624003cc12510697c501180f9598.scope: Deactivated successfully.
Nov 29 01:27:14 np0005539508 podman[123689]: 2025-11-29 06:27:14.709891056 +0000 UTC m=+0.144315129 container died d03167ded80e24be5138ae24b7fd13747892624003cc12510697c501180f9598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 01:27:14 np0005539508 systemd[1]: var-lib-containers-storage-overlay-b7168389f294baf05d30a68f9b029067e062d02a9184ad8a8e13c4d03f67d526-merged.mount: Deactivated successfully.
Nov 29 01:27:14 np0005539508 podman[123689]: 2025-11-29 06:27:14.777642726 +0000 UTC m=+0.212066799 container remove d03167ded80e24be5138ae24b7fd13747892624003cc12510697c501180f9598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_williams, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 01:27:14 np0005539508 systemd[1]: libpod-conmon-d03167ded80e24be5138ae24b7fd13747892624003cc12510697c501180f9598.scope: Deactivated successfully.
Nov 29 01:27:14 np0005539508 systemd-logind[797]: Session 40 logged out. Waiting for processes to exit.
Nov 29 01:27:14 np0005539508 systemd[1]: session-40.scope: Deactivated successfully.
Nov 29 01:27:14 np0005539508 systemd[1]: session-40.scope: Consumed 33.425s CPU time.
Nov 29 01:27:14 np0005539508 systemd-logind[797]: Removed session 40.
Nov 29 01:27:14 np0005539508 podman[123729]: 2025-11-29 06:27:14.945906035 +0000 UTC m=+0.047542014 container create 0747636d192b2d529b27a855b0e927d8204b42da46b638236d6129bb2bf41b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:27:14 np0005539508 systemd[1]: Started libpod-conmon-0747636d192b2d529b27a855b0e927d8204b42da46b638236d6129bb2bf41b85.scope.
Nov 29 01:27:15 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:27:15 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c93023dcab7b162162a83dd4482644e6be42f8a49113398bfa4b7404160265/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:27:15 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c93023dcab7b162162a83dd4482644e6be42f8a49113398bfa4b7404160265/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:27:15 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c93023dcab7b162162a83dd4482644e6be42f8a49113398bfa4b7404160265/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:27:15 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c93023dcab7b162162a83dd4482644e6be42f8a49113398bfa4b7404160265/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:27:15 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c93023dcab7b162162a83dd4482644e6be42f8a49113398bfa4b7404160265/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:27:15 np0005539508 podman[123729]: 2025-11-29 06:27:14.927922891 +0000 UTC m=+0.029558890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:27:15 np0005539508 podman[123729]: 2025-11-29 06:27:15.030153628 +0000 UTC m=+0.131789737 container init 0747636d192b2d529b27a855b0e927d8204b42da46b638236d6129bb2bf41b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 01:27:15 np0005539508 podman[123729]: 2025-11-29 06:27:15.03842831 +0000 UTC m=+0.140064329 container start 0747636d192b2d529b27a855b0e927d8204b42da46b638236d6129bb2bf41b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_benz, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:27:15 np0005539508 podman[123729]: 2025-11-29 06:27:15.046233439 +0000 UTC m=+0.147869418 container attach 0747636d192b2d529b27a855b0e927d8204b42da46b638236d6129bb2bf41b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_benz, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 01:27:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:15.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:15 np0005539508 quirky_benz[123746]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:27:15 np0005539508 quirky_benz[123746]: --> relative data size: 1.0
Nov 29 01:27:15 np0005539508 quirky_benz[123746]: --> All data devices are unavailable
Nov 29 01:27:15 np0005539508 systemd[1]: libpod-0747636d192b2d529b27a855b0e927d8204b42da46b638236d6129bb2bf41b85.scope: Deactivated successfully.
Nov 29 01:27:15 np0005539508 podman[123729]: 2025-11-29 06:27:15.872050948 +0000 UTC m=+0.973686957 container died 0747636d192b2d529b27a855b0e927d8204b42da46b638236d6129bb2bf41b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_benz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 01:27:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:27:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:27:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:16.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:27:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:16 np0005539508 systemd[1]: var-lib-containers-storage-overlay-e4c93023dcab7b162162a83dd4482644e6be42f8a49113398bfa4b7404160265-merged.mount: Deactivated successfully.
Nov 29 01:27:16 np0005539508 podman[123729]: 2025-11-29 06:27:16.818689328 +0000 UTC m=+1.920325347 container remove 0747636d192b2d529b27a855b0e927d8204b42da46b638236d6129bb2bf41b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 01:27:16 np0005539508 systemd[1]: libpod-conmon-0747636d192b2d529b27a855b0e927d8204b42da46b638236d6129bb2bf41b85.scope: Deactivated successfully.
Nov 29 01:27:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:17.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:17 np0005539508 podman[123917]: 2025-11-29 06:27:17.45418851 +0000 UTC m=+0.048603384 container create 26ccd5860eaefba84be01fc67628bc099294f3c85560f0a2d6bd9db2fdbceb61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:27:17 np0005539508 systemd[1]: Started libpod-conmon-26ccd5860eaefba84be01fc67628bc099294f3c85560f0a2d6bd9db2fdbceb61.scope.
Nov 29 01:27:17 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:27:17 np0005539508 podman[123917]: 2025-11-29 06:27:17.429201599 +0000 UTC m=+0.023616493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:27:17 np0005539508 podman[123917]: 2025-11-29 06:27:17.543297199 +0000 UTC m=+0.137712083 container init 26ccd5860eaefba84be01fc67628bc099294f3c85560f0a2d6bd9db2fdbceb61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:27:17 np0005539508 podman[123917]: 2025-11-29 06:27:17.548924397 +0000 UTC m=+0.143339271 container start 26ccd5860eaefba84be01fc67628bc099294f3c85560f0a2d6bd9db2fdbceb61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_moser, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:27:17 np0005539508 silly_moser[123933]: 167 167
Nov 29 01:27:17 np0005539508 systemd[1]: libpod-26ccd5860eaefba84be01fc67628bc099294f3c85560f0a2d6bd9db2fdbceb61.scope: Deactivated successfully.
Nov 29 01:27:17 np0005539508 podman[123917]: 2025-11-29 06:27:17.554574655 +0000 UTC m=+0.148989549 container attach 26ccd5860eaefba84be01fc67628bc099294f3c85560f0a2d6bd9db2fdbceb61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_moser, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 01:27:17 np0005539508 podman[123917]: 2025-11-29 06:27:17.554939636 +0000 UTC m=+0.149354510 container died 26ccd5860eaefba84be01fc67628bc099294f3c85560f0a2d6bd9db2fdbceb61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_moser, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:27:17 np0005539508 systemd[1]: var-lib-containers-storage-overlay-f390cddfadcf191abad773eb9bfb10c331fbcc824ac4088e5533abb40c700ba0-merged.mount: Deactivated successfully.
Nov 29 01:27:17 np0005539508 podman[123917]: 2025-11-29 06:27:17.633383035 +0000 UTC m=+0.227797939 container remove 26ccd5860eaefba84be01fc67628bc099294f3c85560f0a2d6bd9db2fdbceb61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:27:17 np0005539508 systemd[1]: libpod-conmon-26ccd5860eaefba84be01fc67628bc099294f3c85560f0a2d6bd9db2fdbceb61.scope: Deactivated successfully.
Nov 29 01:27:17 np0005539508 podman[123958]: 2025-11-29 06:27:17.858445787 +0000 UTC m=+0.070299432 container create aa46d186d6553c0ba1bda172501161693be6f7a9f21b2e6e215932811d4e0f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_shockley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:27:17 np0005539508 systemd[1]: Started libpod-conmon-aa46d186d6553c0ba1bda172501161693be6f7a9f21b2e6e215932811d4e0f79.scope.
Nov 29 01:27:17 np0005539508 podman[123958]: 2025-11-29 06:27:17.829589638 +0000 UTC m=+0.041443343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:27:17 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:27:17 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5753ba008559f702702c2056518e3b895ea2b63d7d13e17ad257cb8ae40edd5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:27:17 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5753ba008559f702702c2056518e3b895ea2b63d7d13e17ad257cb8ae40edd5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:27:17 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5753ba008559f702702c2056518e3b895ea2b63d7d13e17ad257cb8ae40edd5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:27:17 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5753ba008559f702702c2056518e3b895ea2b63d7d13e17ad257cb8ae40edd5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:27:17 np0005539508 podman[123958]: 2025-11-29 06:27:17.952970308 +0000 UTC m=+0.164823953 container init aa46d186d6553c0ba1bda172501161693be6f7a9f21b2e6e215932811d4e0f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 01:27:17 np0005539508 podman[123958]: 2025-11-29 06:27:17.965400207 +0000 UTC m=+0.177253852 container start aa46d186d6553c0ba1bda172501161693be6f7a9f21b2e6e215932811d4e0f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:27:17 np0005539508 podman[123958]: 2025-11-29 06:27:17.969713318 +0000 UTC m=+0.181566963 container attach aa46d186d6553c0ba1bda172501161693be6f7a9f21b2e6e215932811d4e0f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:27:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:18.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]: {
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:    "1": [
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:        {
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:            "devices": [
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:                "/dev/loop3"
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:            ],
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:            "lv_name": "ceph_lv0",
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:            "lv_size": "7511998464",
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:            "name": "ceph_lv0",
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:            "tags": {
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:                "ceph.cluster_name": "ceph",
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:                "ceph.crush_device_class": "",
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:                "ceph.encrypted": "0",
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:                "ceph.osd_id": "1",
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:                "ceph.type": "block",
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:                "ceph.vdo": "0"
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:            },
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:            "type": "block",
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:            "vg_name": "ceph_vg0"
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:        }
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]:    ]
Nov 29 01:27:18 np0005539508 stupefied_shockley[123974]: }
Nov 29 01:27:18 np0005539508 systemd[1]: libpod-aa46d186d6553c0ba1bda172501161693be6f7a9f21b2e6e215932811d4e0f79.scope: Deactivated successfully.
Nov 29 01:27:18 np0005539508 podman[123958]: 2025-11-29 06:27:18.815131357 +0000 UTC m=+1.026985002 container died aa46d186d6553c0ba1bda172501161693be6f7a9f21b2e6e215932811d4e0f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 01:27:18 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:27:18 np0005539508 systemd[1]: var-lib-containers-storage-overlay-c5753ba008559f702702c2056518e3b895ea2b63d7d13e17ad257cb8ae40edd5-merged.mount: Deactivated successfully.
Nov 29 01:27:18 np0005539508 podman[123958]: 2025-11-29 06:27:18.932242262 +0000 UTC m=+1.144095887 container remove aa46d186d6553c0ba1bda172501161693be6f7a9f21b2e6e215932811d4e0f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_shockley, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:27:18 np0005539508 systemd[1]: libpod-conmon-aa46d186d6553c0ba1bda172501161693be6f7a9f21b2e6e215932811d4e0f79.scope: Deactivated successfully.
Nov 29 01:27:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:19.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:19 np0005539508 podman[124137]: 2025-11-29 06:27:19.550314576 +0000 UTC m=+0.037763870 container create 66beb996dc107cd053d45a49bfa7d1d29b889a295a52b778d092ff10e885eeac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 01:27:19 np0005539508 systemd[1]: Started libpod-conmon-66beb996dc107cd053d45a49bfa7d1d29b889a295a52b778d092ff10e885eeac.scope.
Nov 29 01:27:19 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:27:19 np0005539508 podman[124137]: 2025-11-29 06:27:19.615590677 +0000 UTC m=+0.103040061 container init 66beb996dc107cd053d45a49bfa7d1d29b889a295a52b778d092ff10e885eeac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 01:27:19 np0005539508 podman[124137]: 2025-11-29 06:27:19.622052518 +0000 UTC m=+0.109501812 container start 66beb996dc107cd053d45a49bfa7d1d29b889a295a52b778d092ff10e885eeac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 01:27:19 np0005539508 sweet_dewdney[124153]: 167 167
Nov 29 01:27:19 np0005539508 systemd[1]: libpod-66beb996dc107cd053d45a49bfa7d1d29b889a295a52b778d092ff10e885eeac.scope: Deactivated successfully.
Nov 29 01:27:19 np0005539508 podman[124137]: 2025-11-29 06:27:19.625294299 +0000 UTC m=+0.112743623 container attach 66beb996dc107cd053d45a49bfa7d1d29b889a295a52b778d092ff10e885eeac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 01:27:19 np0005539508 podman[124137]: 2025-11-29 06:27:19.626291977 +0000 UTC m=+0.113741271 container died 66beb996dc107cd053d45a49bfa7d1d29b889a295a52b778d092ff10e885eeac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:27:19 np0005539508 podman[124137]: 2025-11-29 06:27:19.533326509 +0000 UTC m=+0.020775823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:27:19 np0005539508 systemd[1]: var-lib-containers-storage-overlay-61a6595820a20621f1204b3d269d698e651b0429ab87774726b654c319fb5d06-merged.mount: Deactivated successfully.
Nov 29 01:27:19 np0005539508 podman[124137]: 2025-11-29 06:27:19.697528485 +0000 UTC m=+0.184977799 container remove 66beb996dc107cd053d45a49bfa7d1d29b889a295a52b778d092ff10e885eeac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 29 01:27:19 np0005539508 systemd[1]: libpod-conmon-66beb996dc107cd053d45a49bfa7d1d29b889a295a52b778d092ff10e885eeac.scope: Deactivated successfully.
Nov 29 01:27:19 np0005539508 podman[124180]: 2025-11-29 06:27:19.873643724 +0000 UTC m=+0.043046658 container create b2cfe003d1fb54f536bf69f3b19dd87008e81cd7975591477928554b3fa24e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:27:19 np0005539508 systemd[1]: Started libpod-conmon-b2cfe003d1fb54f536bf69f3b19dd87008e81cd7975591477928554b3fa24e21.scope.
Nov 29 01:27:19 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:27:19 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a963cc094f94be44e913a5076da3c9e00c0af6f36d5d79025e532cc2d7867e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:27:19 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a963cc094f94be44e913a5076da3c9e00c0af6f36d5d79025e532cc2d7867e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:27:19 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a963cc094f94be44e913a5076da3c9e00c0af6f36d5d79025e532cc2d7867e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:27:19 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a963cc094f94be44e913a5076da3c9e00c0af6f36d5d79025e532cc2d7867e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:27:19 np0005539508 podman[124180]: 2025-11-29 06:27:19.943214905 +0000 UTC m=+0.112617839 container init b2cfe003d1fb54f536bf69f3b19dd87008e81cd7975591477928554b3fa24e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 01:27:19 np0005539508 podman[124180]: 2025-11-29 06:27:19.852037778 +0000 UTC m=+0.021440742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:27:19 np0005539508 podman[124180]: 2025-11-29 06:27:19.952602728 +0000 UTC m=+0.122005662 container start b2cfe003d1fb54f536bf69f3b19dd87008e81cd7975591477928554b3fa24e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:27:19 np0005539508 podman[124180]: 2025-11-29 06:27:19.958990707 +0000 UTC m=+0.128393651 container attach b2cfe003d1fb54f536bf69f3b19dd87008e81cd7975591477928554b3fa24e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:27:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:20.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:20 np0005539508 gallant_hellman[124196]: {
Nov 29 01:27:20 np0005539508 gallant_hellman[124196]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:27:20 np0005539508 gallant_hellman[124196]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:27:20 np0005539508 gallant_hellman[124196]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:27:20 np0005539508 gallant_hellman[124196]:        "osd_id": 1,
Nov 29 01:27:20 np0005539508 gallant_hellman[124196]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:27:20 np0005539508 gallant_hellman[124196]:        "type": "bluestore"
Nov 29 01:27:20 np0005539508 gallant_hellman[124196]:    }
Nov 29 01:27:20 np0005539508 gallant_hellman[124196]: }
Nov 29 01:27:20 np0005539508 systemd[1]: libpod-b2cfe003d1fb54f536bf69f3b19dd87008e81cd7975591477928554b3fa24e21.scope: Deactivated successfully.
Nov 29 01:27:20 np0005539508 podman[124180]: 2025-11-29 06:27:20.802574526 +0000 UTC m=+0.971977650 container died b2cfe003d1fb54f536bf69f3b19dd87008e81cd7975591477928554b3fa24e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 01:27:20 np0005539508 systemd[1]: var-lib-containers-storage-overlay-83a963cc094f94be44e913a5076da3c9e00c0af6f36d5d79025e532cc2d7867e-merged.mount: Deactivated successfully.
Nov 29 01:27:20 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:27:20 np0005539508 podman[124180]: 2025-11-29 06:27:20.927766557 +0000 UTC m=+1.097169531 container remove b2cfe003d1fb54f536bf69f3b19dd87008e81cd7975591477928554b3fa24e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 01:27:20 np0005539508 systemd[1]: libpod-conmon-b2cfe003d1fb54f536bf69f3b19dd87008e81cd7975591477928554b3fa24e21.scope: Deactivated successfully.
Nov 29 01:27:20 np0005539508 systemd-logind[797]: New session 41 of user zuul.
Nov 29 01:27:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:27:21 np0005539508 systemd[1]: Started Session 41 of User zuul.
Nov 29 01:27:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:21.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:21 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:27:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:27:21 np0005539508 python3.9[124387]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 01:27:21 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:27:21 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 097b7174-1e34-4025-beab-e4816f31426c does not exist
Nov 29 01:27:21 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 3d80ede6-bb50-4904-ace1-ae7f8815591e does not exist
Nov 29 01:27:21 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev ca77568d-54fe-462f-9425-9c5ee7e1767c does not exist
Nov 29 01:27:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:27:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:22.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:27:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:22 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:27:22 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:27:22 np0005539508 python3.9[124639]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:27:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:23.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:23 np0005539508 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 01:27:23 np0005539508 python3.9[124794]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 29 01:27:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:27:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:24.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:27:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:27:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:27:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:27:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:27:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:27:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:27:24 np0005539508 python3.9[124948]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.1zi7txwq follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:27:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:25.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:25 np0005539508 python3.9[125074]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.1zi7txwq mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397643.9081905-107-104677036883407/.source.1zi7txwq _original_basename=.f0owt40_ follow=False checksum=b291f010aefff8b88f41011b780271a83fd1182f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:27:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:26.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:26 np0005539508 python3.9[125226]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:27:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:27.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:27 np0005539508 python3.9[125379]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2GXKCQiCwQEMihcSwDVeJtG2CpTemmA6MTbtOkxbB3OAV5PK8v8imPvDGMDurfGFQG0RzWyv9szlMJXdgIkwejIfy/AY7p6nemHOpu6DdAx0EA/jg1YcOIeeEhyMw1/oFzjYClGMohaI1oTKHtR29UXWphTAroOkf26Exvco6hh2ApRTXV9ObzSoOyCC7+OZcOWgYzdoCfu/0FDGkH2ksKLQS7d4AAh/XZ/njXhK57U7ptxHCReUPECGRv7KB4f8TelZDAIeUyp7ngd/9ivUDO1zue1Qr9ECzTzAFqippGXFmYl3+oSid03CY7bqnxav4xWt7UukbaO57goyIPfkklPdC1kA7kZqa9bqeDU1WgDkqnLu8hluArB0Y0Jz+hDfx9pTbAL6MklraoLaGrnrgcibAollAN+7WGqdWxUotENYaljO7P1Z18MlNllWFzk4Le5jMLNL8qArSlzM+ufOThnLdGEuYZhH1x969AisGQ4MQWn0P0lZFu6fE5VSNA/k=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDdPWx5WoFJTxz6PiFZL5f3XrtE682RjGFiIpoe0LXZO#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFQlZMweHfLYiJFtm1r2tQze/oNx6KzgaXkK+Kof7POk0cFMLbTsXU8qgbQMh4o5LVO0Hbas4mAqxRkGcFCg2Po=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCX0dhB1m0xL0qEi5jnTQLLB4bvueVV5foNrqU/OkfV/4gRyp7uP2q21lWq5Dtl2GLk51pS6oD41RI41Y5g7OSRs8b1Z66d6X1QgX0Qns6pv7FwmNSQ25+2VGV6lppnaN5e+JHiwTmzpf82hl/MiiJrHo7B63mllKyl9SZJxUhP9RR4czS3QNYQsZyP7sZeCWothTZ2Q/GK4BWBEtj2+ifeOpa342IivopCH05YVQOx9bpsdFHMYaalMDCwvr2lfVns8aTcpJ3z9uE8wLdKWTyiinT7nuLX6RuPwhXB2proBRH1wrGSIUgcVcizkWn8QizD8LlsGFcHIQJkmq+sJz6r7cCZLIfS6hdAzI+hYbJie6n/agwfxe4r+mbXsmmC6ALKKk7CEnaiNnDg0fgTaUfBPwSfu+JmVrjdSO+S8f/CMbtYeO6QknOxhLV9oK6knszv7nLlSYXTzXanHkN4Y0fW3dsSvoE+qDR0YijbbT8slqMd6z95wWVDFUmTcN8Nzk8=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILci1PI4hoB56+xxS5gSMKceuJ/dv6t7etpmtENwoSFr#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJIaOLr2ntjSUcigXC7a0sFoonsuh0ChCx2a1R6G8EDmJ8/ZB8NEiJE6KAQJDNU5XsXjuaC44eJhOUMRK9r98xA=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUVpPatup3d17omeiTdJaYR8jCcDbraJSPBxWy49Wxst4G+6/lD41HVIKmjgCgIbbmYSFBPQmoXt4gFXP4FRKna6AbQWi0kwF3/T2biQ2qCid0HVDSS8YRVlyrpdVc1/bIg6YNLkGnhzOMp0S1443+cg5PqutAbrAT1LOg6lSBu+K9gIqJ4un3l2guSweoyba5UhMyjrq4Pffx1QCuBggtYSjmA9Q1r5VVNc2J7AbP0QuzOe6J6DhpdGJsfmHDVXZb/4b/aPUdCTKkLseyUtcqElWVhhnGnpYSJdN81ejalSktGHE4JRHih19wwTokiKvoczUgijBzOfl+kt2ELcpDgzpzY0M9yd0Zz7wrK4rLM6hi8x3LYZXZv8N7KnawUcJ2jfzilx1BVLdNzgwDNB7ZlP4O9Vs3fKnBufCUFPNcRyWl6ooczepbgxqgSbr/Ham2O4/qzvJmzLtu0KxBkaFALRWnyM39nYVE/jrMKJ5ihtVDxIY9FGma/Jifg15gqI0=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN19pK3a7AH/OiwlqJTVWP/qzU/QzkC16s4D1xY1Vn6J#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLsXsjJNPVMX1YVTe2oBmcZpUSiv3HOeuICgZtQun4hTopMXH9dE1jQeUruGwqZ+NsKW6X2bLZZJ0/tcn2owL8Q=#012 create=True mode=0644 path=/tmp/ansible.1zi7txwq state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:28.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:28 np0005539508 python3.9[125531]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.1zi7txwq' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:27:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:27:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:29.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:27:29 np0005539508 python3.9[125686]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.1zi7txwq state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:27:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:27:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:27:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:27:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:27:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:27:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:27:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:27:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:27:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:27:30 np0005539508 systemd[1]: session-41.scope: Deactivated successfully.
Nov 29 01:27:30 np0005539508 systemd[1]: session-41.scope: Consumed 5.231s CPU time.
Nov 29 01:27:30 np0005539508 systemd-logind[797]: Session 41 logged out. Waiting for processes to exit.
Nov 29 01:27:30 np0005539508 systemd-logind[797]: Removed session 41.
Nov 29 01:27:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:30.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:30 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:27:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:31.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:32.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:27:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:33.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:27:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:34.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:34 np0005539508 systemd-logind[797]: New session 42 of user zuul.
Nov 29 01:27:34 np0005539508 systemd[1]: Started Session 42 of User zuul.
Nov 29 01:27:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:35.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:35 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:27:36 np0005539508 python3.9[125871]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:27:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:27:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:36.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:27:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:27:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:37.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:27:37 np0005539508 python3.9[126030]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 01:27:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:27:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:38.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:27:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:38 np0005539508 python3.9[126184]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 01:27:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:27:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:39.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:27:39 np0005539508 python3.9[126338]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:27:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:27:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:40.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:27:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:40 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:27:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:41.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:41 np0005539508 python3.9[126494]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:27:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:27:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:42.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:27:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:42 np0005539508 python3.9[126695]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:42 np0005539508 systemd[1]: session-42.scope: Deactivated successfully.
Nov 29 01:27:42 np0005539508 systemd[1]: session-42.scope: Consumed 4.337s CPU time.
Nov 29 01:27:42 np0005539508 systemd-logind[797]: Session 42 logged out. Waiting for processes to exit.
Nov 29 01:27:42 np0005539508 systemd-logind[797]: Removed session 42.
Nov 29 01:27:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:27:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:43.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:27:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:27:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:44.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:27:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:27:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:45.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:27:45 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:27:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:46.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:27:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:47.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:27:48 np0005539508 systemd-logind[797]: New session 43 of user zuul.
Nov 29 01:27:48 np0005539508 systemd[1]: Started Session 43 of User zuul.
Nov 29 01:27:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:48.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:27:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:49.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:27:49 np0005539508 python3.9[126882]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:27:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:50.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:50 np0005539508 python3.9[127040]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 01:27:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:27:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:51.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:51 np0005539508 python3.9[127125]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 01:27:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:52.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:53.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:27:54
Nov 29 01:27:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:27:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:27:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'default.rgw.control', 'vms', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.log']
Nov 29 01:27:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:27:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:54.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:27:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:27:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:27:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:27:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:27:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:27:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:55.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:55 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:27:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:56.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:57.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:27:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:27:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:27:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:58.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:27:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:27:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:27:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:59.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:00.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:28:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:28:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:01.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:28:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:02.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:02 np0005539508 python3.9[127285]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:28:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:03.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:03 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 01:28:03 np0005539508 ceph-mon[74654]: paxos.0).electionLogic(19) init, last seen epoch 19, mid-election, bumping
Nov 29 01:28:03 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 01:28:03 np0005539508 python3.9[127487]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 01:28:03 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 01:28:04 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 01:28:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 01:28:04 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active} 2 up:standby
Nov 29 01:28:04 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 29 01:28:04 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.vxabpq(active, since 11m), standbys: compute-2.ngsyhe, compute-1.gaxpay
Nov 29 01:28:04 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 01:28:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:04.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:04 np0005539508 python3.9[127637]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:28:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:05.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:05 np0005539508 python3.9[127788]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:28:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:28:06 np0005539508 systemd[1]: session-43.scope: Deactivated successfully.
Nov 29 01:28:06 np0005539508 systemd[1]: session-43.scope: Consumed 6.084s CPU time.
Nov 29 01:28:06 np0005539508 systemd-logind[797]: Session 43 logged out. Waiting for processes to exit.
Nov 29 01:28:06 np0005539508 systemd-logind[797]: Removed session 43.
Nov 29 01:28:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:06.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:07.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:08.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:08 np0005539508 ceph-mon[74654]: mon.compute-2 calling monitor election
Nov 29 01:28:08 np0005539508 ceph-mon[74654]: mon.compute-0 calling monitor election
Nov 29 01:28:08 np0005539508 ceph-mon[74654]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 01:28:08 np0005539508 ceph-mon[74654]: overall HEALTH_OK
Nov 29 01:28:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:28:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:09.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:28:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:10.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:28:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:11.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:11 np0005539508 systemd-logind[797]: New session 44 of user zuul.
Nov 29 01:28:11 np0005539508 systemd[1]: Started Session 44 of User zuul.
Nov 29 01:28:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:12.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:12 np0005539508 python3.9[127971]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:28:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:28:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:28:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:28:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:28:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:28:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:28:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:28:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:28:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:28:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:28:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:28:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:28:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:28:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:28:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:28:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:28:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:28:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:28:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:28:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:28:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:28:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:28:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:28:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:28:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:13.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:28:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:14.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:14 np0005539508 python3.9[128130]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:28:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:15.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:15 np0005539508 python3.9[128283]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:28:16 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:28:16 np0005539508 python3.9[128435]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:28:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:16.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:17 np0005539508 python3.9[128558]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397695.4621618-161-248581819783985/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=8468ae915c8d555809e81a9f592f94c05f7bce7a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:28:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:28:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:17.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:28:17 np0005539508 python3.9[128711]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:28:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:18.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:18 np0005539508 python3.9[128834]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397697.2204843-161-232337174261202/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=03c2952c2692ca442730881904078ac3e566f340 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:28:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:28:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:19.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:28:19 np0005539508 python3.9[128987]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:28:20 np0005539508 python3.9[129110]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397698.755487-161-277762538179837/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=a644651d7a189f3c2f7043d8997cdf89e60c7bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:28:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:20.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:20 np0005539508 python3.9[129262]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:28:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:28:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:21.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:21 np0005539508 python3.9[129415]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:28:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:28:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:22.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:28:22 np0005539508 python3.9[129567]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:28:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 01:28:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:28:23 np0005539508 python3.9[129857]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397701.9347355-350-153542495019964/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=aefce5813a5a721e088ba4838a64c39201165a8e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:28:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:28:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:23.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:28:23 np0005539508 python3.9[130013]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:28:23 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:28:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:28:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:28:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:28:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:28:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:24.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:28:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:28:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:28:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:28:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:28:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:28:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:28:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:28:24 np0005539508 python3.9[130136]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397703.308839-350-70183169409384/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=446989bd92736b57ebc923ce429d8effafd00e68 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:28:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:28:25 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:28:25 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:28:25 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:28:25 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:28:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:28:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:25.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:25 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:28:25 np0005539508 python3.9[130306]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:28:25 np0005539508 python3.9[130529]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397704.6557233-350-240121221266407/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ae6872864caab8d678a666cf230eafbe2b2e1e47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:28:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:28:25 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:28:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:28:25 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:28:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:28:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:28:26 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:28:26 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 92a96287-3c5e-43fd-ab4b-b2d05a07e5e8 does not exist
Nov 29 01:28:26 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 7143ed74-3866-491b-90e0-d5351a358d03 does not exist
Nov 29 01:28:26 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev f202a588-3d41-4acf-b71d-dbe2d35f56ba does not exist
Nov 29 01:28:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:28:26 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:28:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:28:26 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:28:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:28:26 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:28:26 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:28:26 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:28:26 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:28:26 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:28:26 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:28:26 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:28:26 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:28:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:26.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:26 np0005539508 python3.9[130795]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:28:26 np0005539508 podman[130837]: 2025-11-29 06:28:26.62195352 +0000 UTC m=+0.043503285 container create 5f5a3165ec0b04c129450e78ccb71e28a1b860a214d5e3c8f7f601cba4b7be0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:28:26 np0005539508 systemd[1]: Started libpod-conmon-5f5a3165ec0b04c129450e78ccb71e28a1b860a214d5e3c8f7f601cba4b7be0a.scope.
Nov 29 01:28:26 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:28:26 np0005539508 podman[130837]: 2025-11-29 06:28:26.60689763 +0000 UTC m=+0.028447415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:28:26 np0005539508 podman[130837]: 2025-11-29 06:28:26.867780192 +0000 UTC m=+0.289330047 container init 5f5a3165ec0b04c129450e78ccb71e28a1b860a214d5e3c8f7f601cba4b7be0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:28:26 np0005539508 podman[130837]: 2025-11-29 06:28:26.881751532 +0000 UTC m=+0.303301337 container start 5f5a3165ec0b04c129450e78ccb71e28a1b860a214d5e3c8f7f601cba4b7be0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:28:26 np0005539508 frosty_driscoll[130877]: 167 167
Nov 29 01:28:26 np0005539508 systemd[1]: libpod-5f5a3165ec0b04c129450e78ccb71e28a1b860a214d5e3c8f7f601cba4b7be0a.scope: Deactivated successfully.
Nov 29 01:28:26 np0005539508 conmon[130877]: conmon 5f5a3165ec0b04c12945 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5f5a3165ec0b04c129450e78ccb71e28a1b860a214d5e3c8f7f601cba4b7be0a.scope/container/memory.events
Nov 29 01:28:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:28:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:27.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:28:27 np0005539508 python3.9[131021]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:28:27 np0005539508 podman[130837]: 2025-11-29 06:28:27.305249798 +0000 UTC m=+0.726799583 container attach 5f5a3165ec0b04c129450e78ccb71e28a1b860a214d5e3c8f7f601cba4b7be0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 01:28:27 np0005539508 podman[130837]: 2025-11-29 06:28:27.306140203 +0000 UTC m=+0.727689978 container died 5f5a3165ec0b04c129450e78ccb71e28a1b860a214d5e3c8f7f601cba4b7be0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 01:28:28 np0005539508 python3.9[131174]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:28:28 np0005539508 systemd[1]: var-lib-containers-storage-overlay-7c0b60d040d5400b0e561120842b21ff4d52d947effc3c5b7cd27fe126208ad0-merged.mount: Deactivated successfully.
Nov 29 01:28:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:28:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:28.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:28:28 np0005539508 python3.9[131297]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397707.5365577-526-259588650306240/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=740ccfe5daa9c5421ca02e98e83fd489994437b6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:28:28 np0005539508 podman[130837]: 2025-11-29 06:28:28.955618813 +0000 UTC m=+2.377168618 container remove 5f5a3165ec0b04c129450e78ccb71e28a1b860a214d5e3c8f7f601cba4b7be0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_driscoll, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 01:28:28 np0005539508 systemd[1]: libpod-conmon-5f5a3165ec0b04c129450e78ccb71e28a1b860a214d5e3c8f7f601cba4b7be0a.scope: Deactivated successfully.
Nov 29 01:28:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:29.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:29 np0005539508 podman[131351]: 2025-11-29 06:28:29.129430246 +0000 UTC m=+0.025391217 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:28:29 np0005539508 podman[131351]: 2025-11-29 06:28:29.333222676 +0000 UTC m=+0.229183637 container create 17685d9a241417f0456581e3932acd41676797ccc160946af4742ca12b77548a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:28:29 np0005539508 systemd[1]: Started libpod-conmon-17685d9a241417f0456581e3932acd41676797ccc160946af4742ca12b77548a.scope.
Nov 29 01:28:29 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:28:29 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d3f777d4fb70987ced7cef1b546a828f4d93cdb8dfe32798c8dd0e4173e272/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:28:29 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d3f777d4fb70987ced7cef1b546a828f4d93cdb8dfe32798c8dd0e4173e272/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:28:29 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d3f777d4fb70987ced7cef1b546a828f4d93cdb8dfe32798c8dd0e4173e272/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:28:29 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d3f777d4fb70987ced7cef1b546a828f4d93cdb8dfe32798c8dd0e4173e272/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:28:29 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d3f777d4fb70987ced7cef1b546a828f4d93cdb8dfe32798c8dd0e4173e272/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:28:29 np0005539508 podman[131351]: 2025-11-29 06:28:29.478262156 +0000 UTC m=+0.374223097 container init 17685d9a241417f0456581e3932acd41676797ccc160946af4742ca12b77548a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:28:29 np0005539508 podman[131351]: 2025-11-29 06:28:29.489742354 +0000 UTC m=+0.385703275 container start 17685d9a241417f0456581e3932acd41676797ccc160946af4742ca12b77548a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 01:28:29 np0005539508 podman[131351]: 2025-11-29 06:28:29.493611295 +0000 UTC m=+0.389572236 container attach 17685d9a241417f0456581e3932acd41676797ccc160946af4742ca12b77548a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 01:28:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:28:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:28:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:28:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:28:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:28:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:28:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:28:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:28:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:28:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:28:29 np0005539508 python3.9[131477]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:28:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:30.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:30 np0005539508 gifted_proskuriakova[131445]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:28:30 np0005539508 gifted_proskuriakova[131445]: --> relative data size: 1.0
Nov 29 01:28:30 np0005539508 gifted_proskuriakova[131445]: --> All data devices are unavailable
Nov 29 01:28:30 np0005539508 systemd[1]: libpod-17685d9a241417f0456581e3932acd41676797ccc160946af4742ca12b77548a.scope: Deactivated successfully.
Nov 29 01:28:30 np0005539508 podman[131351]: 2025-11-29 06:28:30.340508883 +0000 UTC m=+1.236469844 container died 17685d9a241417f0456581e3932acd41676797ccc160946af4742ca12b77548a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 01:28:30 np0005539508 python3.9[131607]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397709.1010518-526-226613274559076/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=446989bd92736b57ebc923ce429d8effafd00e68 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:28:31 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:28:31 np0005539508 python3.9[131774]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:28:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:28:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:31.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:28:31 np0005539508 systemd[1]: var-lib-containers-storage-overlay-d2d3f777d4fb70987ced7cef1b546a828f4d93cdb8dfe32798c8dd0e4173e272-merged.mount: Deactivated successfully.
Nov 29 01:28:31 np0005539508 python3.9[131898]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397710.633576-526-264266063279056/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=75b192adbbcf3b531af652912e1c620c8b2fc70c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:28:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:28:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:32.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:28:32 np0005539508 podman[131351]: 2025-11-29 06:28:32.492220871 +0000 UTC m=+3.388181832 container remove 17685d9a241417f0456581e3932acd41676797ccc160946af4742ca12b77548a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Nov 29 01:28:32 np0005539508 systemd[1]: libpod-conmon-17685d9a241417f0456581e3932acd41676797ccc160946af4742ca12b77548a.scope: Deactivated successfully.
Nov 29 01:28:33 np0005539508 python3.9[132170]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:28:33 np0005539508 podman[132192]: 2025-11-29 06:28:33.180583495 +0000 UTC m=+0.105518270 container create 73caf0b8b4784900e9ab29cd3de5b74e630d03b85e2b88c236c9a3b047ab4e3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wescoff, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:28:33 np0005539508 podman[132192]: 2025-11-29 06:28:33.100501764 +0000 UTC m=+0.025436589 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:28:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:33.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:33 np0005539508 systemd[1]: Started libpod-conmon-73caf0b8b4784900e9ab29cd3de5b74e630d03b85e2b88c236c9a3b047ab4e3f.scope.
Nov 29 01:28:33 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:28:33 np0005539508 podman[132192]: 2025-11-29 06:28:33.662597755 +0000 UTC m=+0.587532580 container init 73caf0b8b4784900e9ab29cd3de5b74e630d03b85e2b88c236c9a3b047ab4e3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wescoff, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:28:33 np0005539508 podman[132192]: 2025-11-29 06:28:33.673786045 +0000 UTC m=+0.598720820 container start 73caf0b8b4784900e9ab29cd3de5b74e630d03b85e2b88c236c9a3b047ab4e3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wescoff, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:28:33 np0005539508 youthful_wescoff[132285]: 167 167
Nov 29 01:28:33 np0005539508 systemd[1]: libpod-73caf0b8b4784900e9ab29cd3de5b74e630d03b85e2b88c236c9a3b047ab4e3f.scope: Deactivated successfully.
Nov 29 01:28:33 np0005539508 podman[132192]: 2025-11-29 06:28:33.683453991 +0000 UTC m=+0.608388826 container attach 73caf0b8b4784900e9ab29cd3de5b74e630d03b85e2b88c236c9a3b047ab4e3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:28:33 np0005539508 podman[132192]: 2025-11-29 06:28:33.685054627 +0000 UTC m=+0.609989402 container died 73caf0b8b4784900e9ab29cd3de5b74e630d03b85e2b88c236c9a3b047ab4e3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 01:28:33 np0005539508 systemd[1]: var-lib-containers-storage-overlay-c12106a6a9fed0c3f19631172e254e3440f258f6689b0ca50f1e60144af69c08-merged.mount: Deactivated successfully.
Nov 29 01:28:33 np0005539508 podman[132192]: 2025-11-29 06:28:33.785512171 +0000 UTC m=+0.710446946 container remove 73caf0b8b4784900e9ab29cd3de5b74e630d03b85e2b88c236c9a3b047ab4e3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:28:33 np0005539508 python3.9[132363]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:28:33 np0005539508 systemd[1]: libpod-conmon-73caf0b8b4784900e9ab29cd3de5b74e630d03b85e2b88c236c9a3b047ab4e3f.scope: Deactivated successfully.
Nov 29 01:28:34 np0005539508 podman[132409]: 2025-11-29 06:28:33.943344226 +0000 UTC m=+0.022833195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:28:34 np0005539508 podman[132409]: 2025-11-29 06:28:34.046571479 +0000 UTC m=+0.126060478 container create 5a7b220b6cd17f083d86d15477a98c9ffce91c7fce8306af47e4b462acab2ffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shamir, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:28:34 np0005539508 systemd[1]: Started libpod-conmon-5a7b220b6cd17f083d86d15477a98c9ffce91c7fce8306af47e4b462acab2ffc.scope.
Nov 29 01:28:34 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:28:34 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47128314c6321e968747d3a965ec696fecd8f155dcbbb2d58eb89a71158c353/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:28:34 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47128314c6321e968747d3a965ec696fecd8f155dcbbb2d58eb89a71158c353/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:28:34 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47128314c6321e968747d3a965ec696fecd8f155dcbbb2d58eb89a71158c353/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:28:34 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47128314c6321e968747d3a965ec696fecd8f155dcbbb2d58eb89a71158c353/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:28:34 np0005539508 podman[132409]: 2025-11-29 06:28:34.207649557 +0000 UTC m=+0.287138536 container init 5a7b220b6cd17f083d86d15477a98c9ffce91c7fce8306af47e4b462acab2ffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shamir, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:28:34 np0005539508 podman[132409]: 2025-11-29 06:28:34.215402079 +0000 UTC m=+0.294891088 container start 5a7b220b6cd17f083d86d15477a98c9ffce91c7fce8306af47e4b462acab2ffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 01:28:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:34 np0005539508 podman[132409]: 2025-11-29 06:28:34.263042122 +0000 UTC m=+0.342531131 container attach 5a7b220b6cd17f083d86d15477a98c9ffce91c7fce8306af47e4b462acab2ffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shamir, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:28:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:34.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:34 np0005539508 python3.9[132527]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397713.3477743-737-150065775857074/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3385b01217fece5877d0a0cc7f45f60761b1d6d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]: {
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:    "1": [
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:        {
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:            "devices": [
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:                "/dev/loop3"
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:            ],
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:            "lv_name": "ceph_lv0",
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:            "lv_size": "7511998464",
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:            "name": "ceph_lv0",
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:            "tags": {
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:                "ceph.cluster_name": "ceph",
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:                "ceph.crush_device_class": "",
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:                "ceph.encrypted": "0",
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:                "ceph.osd_id": "1",
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:                "ceph.type": "block",
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:                "ceph.vdo": "0"
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:            },
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:            "type": "block",
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:            "vg_name": "ceph_vg0"
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:        }
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]:    ]
Nov 29 01:28:34 np0005539508 quizzical_shamir[132496]: }
Nov 29 01:28:35 np0005539508 systemd[1]: libpod-5a7b220b6cd17f083d86d15477a98c9ffce91c7fce8306af47e4b462acab2ffc.scope: Deactivated successfully.
Nov 29 01:28:35 np0005539508 podman[132687]: 2025-11-29 06:28:35.075100044 +0000 UTC m=+0.028316601 container died 5a7b220b6cd17f083d86d15477a98c9ffce91c7fce8306af47e4b462acab2ffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shamir, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:28:35 np0005539508 python3.9[132686]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:28:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:28:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:35.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:28:35 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 01:28:35 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.8 total, 600.0 interval#012Cumulative writes: 7884 writes, 33K keys, 7884 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 7884 writes, 1451 syncs, 5.43 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7884 writes, 33K keys, 7884 commit groups, 1.0 writes per commit group, ingest: 20.94 MB, 0.03 MB/s#012Interval WAL: 7884 writes, 1451 syncs, 5.43 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.8 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.8 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.8 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slo
Nov 29 01:28:35 np0005539508 python3.9[132854]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:28:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:28:36 np0005539508 systemd[1]: var-lib-containers-storage-overlay-a47128314c6321e968747d3a965ec696fecd8f155dcbbb2d58eb89a71158c353-merged.mount: Deactivated successfully.
Nov 29 01:28:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:36.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:36 np0005539508 podman[132687]: 2025-11-29 06:28:36.292524653 +0000 UTC m=+1.245741210 container remove 5a7b220b6cd17f083d86d15477a98c9ffce91c7fce8306af47e4b462acab2ffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shamir, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:28:36 np0005539508 systemd[1]: libpod-conmon-5a7b220b6cd17f083d86d15477a98c9ffce91c7fce8306af47e4b462acab2ffc.scope: Deactivated successfully.
Nov 29 01:28:36 np0005539508 python3.9[132977]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397715.3238444-807-21213177959007/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3385b01217fece5877d0a0cc7f45f60761b1d6d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:28:36 np0005539508 podman[133199]: 2025-11-29 06:28:36.839215633 +0000 UTC m=+0.023885224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:28:36 np0005539508 podman[133199]: 2025-11-29 06:28:36.968540133 +0000 UTC m=+0.153209714 container create abc330159b66d46268aa933a5bc88c6121559d7674c989d586fcefe964d9f5f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_diffie, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 01:28:37 np0005539508 python3.9[133282]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:28:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:28:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:37.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:28:37 np0005539508 systemd[1]: Started libpod-conmon-abc330159b66d46268aa933a5bc88c6121559d7674c989d586fcefe964d9f5f1.scope.
Nov 29 01:28:37 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:28:37 np0005539508 podman[133199]: 2025-11-29 06:28:37.565452339 +0000 UTC m=+0.750121960 container init abc330159b66d46268aa933a5bc88c6121559d7674c989d586fcefe964d9f5f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 01:28:37 np0005539508 podman[133199]: 2025-11-29 06:28:37.575682282 +0000 UTC m=+0.760351863 container start abc330159b66d46268aa933a5bc88c6121559d7674c989d586fcefe964d9f5f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_diffie, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:28:37 np0005539508 unruffled_diffie[133310]: 167 167
Nov 29 01:28:37 np0005539508 systemd[1]: libpod-abc330159b66d46268aa933a5bc88c6121559d7674c989d586fcefe964d9f5f1.scope: Deactivated successfully.
Nov 29 01:28:37 np0005539508 podman[133199]: 2025-11-29 06:28:37.696917021 +0000 UTC m=+0.881586682 container attach abc330159b66d46268aa933a5bc88c6121559d7674c989d586fcefe964d9f5f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_diffie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:28:37 np0005539508 podman[133199]: 2025-11-29 06:28:37.697549439 +0000 UTC m=+0.882219040 container died abc330159b66d46268aa933a5bc88c6121559d7674c989d586fcefe964d9f5f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 01:28:37 np0005539508 python3.9[133452]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:28:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:38.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:38 np0005539508 systemd[1]: var-lib-containers-storage-overlay-e6a5b6479dbeb9d8fb38bd0f62918c245310c120c6c1aa6f0302970d634deb46-merged.mount: Deactivated successfully.
Nov 29 01:28:38 np0005539508 python3.9[133576]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397717.3795857-878-58498530684549/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3385b01217fece5877d0a0cc7f45f60761b1d6d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:28:39 np0005539508 podman[133199]: 2025-11-29 06:28:39.049410454 +0000 UTC m=+2.234080035 container remove abc330159b66d46268aa933a5bc88c6121559d7674c989d586fcefe964d9f5f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 01:28:39 np0005539508 systemd[1]: libpod-conmon-abc330159b66d46268aa933a5bc88c6121559d7674c989d586fcefe964d9f5f1.scope: Deactivated successfully.
Nov 29 01:28:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:28:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:39.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:28:39 np0005539508 podman[133635]: 2025-11-29 06:28:39.251784054 +0000 UTC m=+0.059615377 container create eb15e35498b9505e0bbb2c0e794ebc19af8bb45dcadc15d26b582559667dd5df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 29 01:28:39 np0005539508 podman[133635]: 2025-11-29 06:28:39.217013969 +0000 UTC m=+0.024845322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:28:39 np0005539508 systemd[1]: Started libpod-conmon-eb15e35498b9505e0bbb2c0e794ebc19af8bb45dcadc15d26b582559667dd5df.scope.
Nov 29 01:28:39 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:28:39 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0ced9176a00d0ad1b0192c37ead5b364e03e2e48da400fc0edeaf9a28d273d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:28:39 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0ced9176a00d0ad1b0192c37ead5b364e03e2e48da400fc0edeaf9a28d273d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:28:39 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0ced9176a00d0ad1b0192c37ead5b364e03e2e48da400fc0edeaf9a28d273d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:28:39 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0ced9176a00d0ad1b0192c37ead5b364e03e2e48da400fc0edeaf9a28d273d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:28:39 np0005539508 python3.9[133758]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:28:39 np0005539508 podman[133635]: 2025-11-29 06:28:39.896871069 +0000 UTC m=+0.704702402 container init eb15e35498b9505e0bbb2c0e794ebc19af8bb45dcadc15d26b582559667dd5df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curie, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 01:28:39 np0005539508 podman[133635]: 2025-11-29 06:28:39.909715657 +0000 UTC m=+0.717546990 container start eb15e35498b9505e0bbb2c0e794ebc19af8bb45dcadc15d26b582559667dd5df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curie, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:28:39 np0005539508 podman[133635]: 2025-11-29 06:28:39.941706362 +0000 UTC m=+0.749537725 container attach eb15e35498b9505e0bbb2c0e794ebc19af8bb45dcadc15d26b582559667dd5df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curie, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:28:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:40.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:40 np0005539508 python3.9[133912]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:28:40 np0005539508 quirky_curie[133727]: {
Nov 29 01:28:40 np0005539508 quirky_curie[133727]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:28:40 np0005539508 quirky_curie[133727]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:28:40 np0005539508 quirky_curie[133727]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:28:40 np0005539508 quirky_curie[133727]:        "osd_id": 1,
Nov 29 01:28:40 np0005539508 quirky_curie[133727]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:28:40 np0005539508 quirky_curie[133727]:        "type": "bluestore"
Nov 29 01:28:40 np0005539508 quirky_curie[133727]:    }
Nov 29 01:28:40 np0005539508 quirky_curie[133727]: }
Nov 29 01:28:40 np0005539508 systemd[1]: libpod-eb15e35498b9505e0bbb2c0e794ebc19af8bb45dcadc15d26b582559667dd5df.scope: Deactivated successfully.
Nov 29 01:28:40 np0005539508 podman[133635]: 2025-11-29 06:28:40.761721662 +0000 UTC m=+1.569552995 container died eb15e35498b9505e0bbb2c0e794ebc19af8bb45dcadc15d26b582559667dd5df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curie, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:28:40 np0005539508 python3.9[134062]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397719.9633133-955-33488274034566/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3385b01217fece5877d0a0cc7f45f60761b1d6d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:28:41 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:28:41 np0005539508 systemd[1]: var-lib-containers-storage-overlay-6d0ced9176a00d0ad1b0192c37ead5b364e03e2e48da400fc0edeaf9a28d273d-merged.mount: Deactivated successfully.
Nov 29 01:28:41 np0005539508 podman[133635]: 2025-11-29 06:28:41.150828753 +0000 UTC m=+1.958660086 container remove eb15e35498b9505e0bbb2c0e794ebc19af8bb45dcadc15d26b582559667dd5df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:28:41 np0005539508 systemd[1]: libpod-conmon-eb15e35498b9505e0bbb2c0e794ebc19af8bb45dcadc15d26b582559667dd5df.scope: Deactivated successfully.
Nov 29 01:28:41 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:28:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:28:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:41.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:28:41 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:28:41 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:28:41 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:28:41 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 3b8093ca-856b-4a55-b9dd-4fce9d6c6d95 does not exist
Nov 29 01:28:41 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 1c55fe6d-f5de-42d0-be82-9380ad626aa1 does not exist
Nov 29 01:28:41 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 6049e527-278f-4d9e-9490-37827d4ea568 does not exist
Nov 29 01:28:41 np0005539508 python3.9[134218]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:28:41 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:28:41 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:28:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:28:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:42.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:28:42 np0005539508 python3.9[134420]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:28:43 np0005539508 python3.9[134594]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397721.9604077-1025-75576068781504/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3385b01217fece5877d0a0cc7f45f60761b1d6d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:28:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:43.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:43 np0005539508 python3.9[134746]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:28:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:44.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:44 np0005539508 python3.9[134898]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:28:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:45.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:45 np0005539508 python3.9[135022]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397724.1396701-1083-160700555293399/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3385b01217fece5877d0a0cc7f45f60761b1d6d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:28:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:28:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:46.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:47 np0005539508 systemd[1]: session-44.scope: Deactivated successfully.
Nov 29 01:28:47 np0005539508 systemd[1]: session-44.scope: Consumed 25.302s CPU time.
Nov 29 01:28:47 np0005539508 systemd-logind[797]: Session 44 logged out. Waiting for processes to exit.
Nov 29 01:28:47 np0005539508 systemd-logind[797]: Removed session 44.
Nov 29 01:28:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:28:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:47.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:28:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:48.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:49.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:50.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:51 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:28:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:28:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:51.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:28:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:28:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:52.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:28:52 np0005539508 systemd-logind[797]: New session 45 of user zuul.
Nov 29 01:28:52 np0005539508 systemd[1]: Started Session 45 of User zuul.
Nov 29 01:28:53 np0005539508 ceph-mgr[74948]: [devicehealth INFO root] Check health
Nov 29 01:28:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:53.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:53 np0005539508 python3.9[135206]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:28:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:28:54
Nov 29 01:28:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:28:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:28:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'backups', 'default.rgw.log', '.mgr', 'default.rgw.control']
Nov 29 01:28:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:28:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:28:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:28:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:28:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:28:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:28:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:28:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:28:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:54.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:28:54 np0005539508 python3.9[135358]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:28:55 np0005539508 python3.9[135482]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397733.7867298-67-138562747292606/.source.conf _original_basename=ceph.conf follow=False checksum=b678e866ce48244e104f356f74865d3398155ff0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:28:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:55.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:56 np0005539508 python3.9[135634]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:28:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:56.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:56 np0005539508 python3.9[135759]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397735.403503-67-60973800921858/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=d5bc1b1c0617b147c8e3e13846b179249a244079 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.744763) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397736744854, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1241, "num_deletes": 253, "total_data_size": 2179703, "memory_usage": 2213976, "flush_reason": "Manual Compaction"}
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397736755757, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1349515, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8466, "largest_seqno": 9706, "table_properties": {"data_size": 1344755, "index_size": 2156, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12190, "raw_average_key_size": 20, "raw_value_size": 1334447, "raw_average_value_size": 2246, "num_data_blocks": 99, "num_entries": 594, "num_filter_entries": 594, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764397574, "oldest_key_time": 1764397574, "file_creation_time": 1764397736, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 11030 microseconds, and 5211 cpu microseconds.
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.755807) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1349515 bytes OK
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.755826) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.757311) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.757328) EVENT_LOG_v1 {"time_micros": 1764397736757323, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.757347) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2174171, prev total WAL file size 2174171, number of live WAL files 2.
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.758251) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323534' seq:0, type:0; will stop at (end)
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1317KB)], [20(10002KB)]
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397736758329, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 11591771, "oldest_snapshot_seqno": -1}
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3883 keys, 9448971 bytes, temperature: kUnknown
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397736884640, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 9448971, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9417504, "index_size": 20669, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9733, "raw_key_size": 95323, "raw_average_key_size": 24, "raw_value_size": 9341645, "raw_average_value_size": 2405, "num_data_blocks": 911, "num_entries": 3883, "num_filter_entries": 3883, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 1764397736, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.884964) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9448971 bytes
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.886489) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 91.7 rd, 74.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 9.8 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(15.6) write-amplify(7.0) OK, records in: 4361, records dropped: 478 output_compression: NoCompression
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.886509) EVENT_LOG_v1 {"time_micros": 1764397736886500, "job": 6, "event": "compaction_finished", "compaction_time_micros": 126391, "compaction_time_cpu_micros": 26076, "output_level": 6, "num_output_files": 1, "total_output_size": 9448971, "num_input_records": 4361, "num_output_records": 3883, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397736886860, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397736888610, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.758120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.888718) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.888734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.888735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.888737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:28:56 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.888739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:28:57 np0005539508 systemd[1]: session-45.scope: Deactivated successfully.
Nov 29 01:28:57 np0005539508 systemd[1]: session-45.scope: Consumed 2.993s CPU time.
Nov 29 01:28:57 np0005539508 systemd-logind[797]: Session 45 logged out. Waiting for processes to exit.
Nov 29 01:28:57 np0005539508 systemd-logind[797]: Removed session 45.
Nov 29 01:28:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:57.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:28:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:58.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:28:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:28:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:28:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:59.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:00.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:29:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:01.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:29:01 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:29:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:29:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:02.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:29:02 np0005539508 systemd-logind[797]: New session 46 of user zuul.
Nov 29 01:29:02 np0005539508 systemd[1]: Started Session 46 of User zuul.
Nov 29 01:29:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:03.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:03 np0005539508 python3.9[135994]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:29:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:04.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:29:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:05.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:29:05 np0005539508 python3.9[136151]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:29:06 np0005539508 python3.9[136305]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:29:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:29:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:06.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:07 np0005539508 python3.9[136455]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:29:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:29:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:07.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:29:08 np0005539508 python3.9[136610]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 29 01:29:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:08.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:29:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:09.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:29:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:10.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:11.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:11 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:29:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:29:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:12.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:29:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:29:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:29:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:29:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:29:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:29:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:29:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:29:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:29:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:29:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:29:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:29:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:29:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:29:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:29:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:29:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:29:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:29:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:29:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:29:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:29:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:29:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:29:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:29:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:13.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:13 np0005539508 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 29 01:29:14 np0005539508 python3.9[136769]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 01:29:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:14.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:15 np0005539508 python3.9[136853]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:29:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:29:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:15.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:29:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:16 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:29:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:16.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:17.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:18.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:18 np0005539508 python3.9[137008]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 01:29:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:19.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:19 np0005539508 python3[137164]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 29 01:29:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:20.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:21.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:29:21 np0005539508 python3.9[137319]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:22 np0005539508 python3.9[137471]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:29:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:22.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:22 np0005539508 python3.9[137549]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:23.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:23 np0005539508 python3.9[137754]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:29:24 np0005539508 python3.9[137832]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.5p81rd5q recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:29:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:29:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:29:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:29:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:29:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:29:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:24.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:25 np0005539508 python3.9[137985]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:29:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:25.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:25 np0005539508 python3.9[138063]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:29:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:26.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:26 np0005539508 python3.9[138215]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:29:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:29:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:27.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:29:28 np0005539508 python3[138369]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 01:29:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:28.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:29 np0005539508 python3.9[138521]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:29:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:29.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:29:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:29:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:29:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:29:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:29:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:29:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:29:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:29:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:29:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:29:29 np0005539508 python3.9[138649]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397768.4363558-436-75251911659796/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:30.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:30 np0005539508 python3.9[138801]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:29:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:31.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:31 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:29:31 np0005539508 python3.9[138927]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397770.1768346-481-42492154048836/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:32.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:32 np0005539508 python3.9[139079]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:29:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:33.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:33 np0005539508 python3.9[139205]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397772.0019581-526-199760301193895/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:34 np0005539508 python3.9[139357]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:29:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:29:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:34.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:29:34 np0005539508 python3.9[139482]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397773.782511-571-243522724198203/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:29:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:35.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:29:36 np0005539508 python3.9[139635]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:29:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:29:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:36.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:36 np0005539508 python3.9[139760]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397775.5210905-616-267003980042778/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:29:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:37.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:29:37 np0005539508 python3.9[139913]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:38.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:38 np0005539508 python3.9[140065]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:29:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:39.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:39 np0005539508 python3.9[140221]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:29:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:40.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:29:40 np0005539508 python3.9[140373]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:29:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:29:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:41.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:29:41 np0005539508 python3.9[140527]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:29:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:29:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:42.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:42 np0005539508 python3.9[140799]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:29:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:29:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:43.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:29:43 np0005539508 python3.9[141021]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:44.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:29:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 01:29:44 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 01:29:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 01:29:44 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 01:29:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:45.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:45 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 01:29:45 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 01:29:45 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:29:45 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:29:46 np0005539508 python3.9[141174]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:29:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:46.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:46 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 29 01:29:46 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:29:46 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:29:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:29:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:47.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:47 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 29 01:29:47 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:29:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:48.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:49.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:49 np0005539508 python3.9[141329]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:29:49 np0005539508 ovs-vsctl[141330]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 29 01:29:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:50.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:29:50 np0005539508 python3.9[141482]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:29:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:51.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:52 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:29:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:29:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:29:52 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:29:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:29:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:52.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:29:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:29:53 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:29:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:29:53 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:29:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:29:53 np0005539508 python3.9[141639]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:29:53 np0005539508 ovs-vsctl[141640]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 29 01:29:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:53.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:53 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:29:53 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:29:53 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:29:53 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 43d6d224-c1a4-4915-9418-38207f6d58d5 does not exist
Nov 29 01:29:53 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 56a11c13-4450-4733-b2ec-f83b649753b2 does not exist
Nov 29 01:29:53 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 8c672ff4-9a95-45e6-9aae-6688cf9b4e0a does not exist
Nov 29 01:29:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:29:53 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:29:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:29:53 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:29:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:29:53 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:29:54 np0005539508 python3.9[141838]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:29:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:29:54
Nov 29 01:29:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:29:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:29:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'images', '.rgw.root', 'volumes', 'vms', 'default.rgw.control', 'default.rgw.meta']
Nov 29 01:29:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:29:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:29:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:29:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:29:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:29:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:29:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:29:54 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:29:54 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:29:54 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:29:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:54.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:54 np0005539508 podman[141953]: 2025-11-29 06:29:54.478566452 +0000 UTC m=+0.050458234 container create b6b4d5b8fc6eba9331271088023407607fa247e3553ee9f80b76a692ad973480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:29:54 np0005539508 podman[141953]: 2025-11-29 06:29:54.455868799 +0000 UTC m=+0.027760601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:29:54 np0005539508 systemd[1]: Started libpod-conmon-b6b4d5b8fc6eba9331271088023407607fa247e3553ee9f80b76a692ad973480.scope.
Nov 29 01:29:54 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:29:55 np0005539508 python3.9[142095]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:29:55 np0005539508 podman[141953]: 2025-11-29 06:29:55.203315191 +0000 UTC m=+0.775206973 container init b6b4d5b8fc6eba9331271088023407607fa247e3553ee9f80b76a692ad973480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shannon, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 01:29:55 np0005539508 podman[141953]: 2025-11-29 06:29:55.21024772 +0000 UTC m=+0.782139502 container start b6b4d5b8fc6eba9331271088023407607fa247e3553ee9f80b76a692ad973480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:29:55 np0005539508 systemd[1]: libpod-b6b4d5b8fc6eba9331271088023407607fa247e3553ee9f80b76a692ad973480.scope: Deactivated successfully.
Nov 29 01:29:55 np0005539508 clever_shannon[142098]: 167 167
Nov 29 01:29:55 np0005539508 conmon[142098]: conmon b6b4d5b8fc6eba933127 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b6b4d5b8fc6eba9331271088023407607fa247e3553ee9f80b76a692ad973480.scope/container/memory.events
Nov 29 01:29:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:55.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:55 np0005539508 podman[141953]: 2025-11-29 06:29:55.333475098 +0000 UTC m=+0.905366880 container attach b6b4d5b8fc6eba9331271088023407607fa247e3553ee9f80b76a692ad973480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shannon, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 01:29:55 np0005539508 podman[141953]: 2025-11-29 06:29:55.334384775 +0000 UTC m=+0.906276567 container died b6b4d5b8fc6eba9331271088023407607fa247e3553ee9f80b76a692ad973480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shannon, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 29 01:29:55 np0005539508 systemd[1]: var-lib-containers-storage-overlay-6121f801e29c87226b4e0930563eb2700a7c97afbd730d5ec5c2c9abe2dcd983-merged.mount: Deactivated successfully.
Nov 29 01:29:55 np0005539508 podman[141953]: 2025-11-29 06:29:55.502032692 +0000 UTC m=+1.073924474 container remove b6b4d5b8fc6eba9331271088023407607fa247e3553ee9f80b76a692ad973480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shannon, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:29:55 np0005539508 systemd[1]: libpod-conmon-b6b4d5b8fc6eba9331271088023407607fa247e3553ee9f80b76a692ad973480.scope: Deactivated successfully.
Nov 29 01:29:55 np0005539508 podman[142229]: 2025-11-29 06:29:55.652316349 +0000 UTC m=+0.036629836 container create bb8f7561ae3ee158e5d951d0c7164c625385ee355fe1cdd8bf43dd210fdeb742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 01:29:55 np0005539508 systemd[1]: Started libpod-conmon-bb8f7561ae3ee158e5d951d0c7164c625385ee355fe1cdd8bf43dd210fdeb742.scope.
Nov 29 01:29:55 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:29:55 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/608006990b870a87c1b1886867c9f5d0f78e1d427a27551247aecf51b629c2a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:29:55 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/608006990b870a87c1b1886867c9f5d0f78e1d427a27551247aecf51b629c2a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:29:55 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/608006990b870a87c1b1886867c9f5d0f78e1d427a27551247aecf51b629c2a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:29:55 np0005539508 podman[142229]: 2025-11-29 06:29:55.637111471 +0000 UTC m=+0.021424978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:29:55 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/608006990b870a87c1b1886867c9f5d0f78e1d427a27551247aecf51b629c2a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:29:55 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/608006990b870a87c1b1886867c9f5d0f78e1d427a27551247aecf51b629c2a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:29:55 np0005539508 podman[142229]: 2025-11-29 06:29:55.749069715 +0000 UTC m=+0.133383232 container init bb8f7561ae3ee158e5d951d0c7164c625385ee355fe1cdd8bf43dd210fdeb742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:29:55 np0005539508 podman[142229]: 2025-11-29 06:29:55.76035909 +0000 UTC m=+0.144672597 container start bb8f7561ae3ee158e5d951d0c7164c625385ee355fe1cdd8bf43dd210fdeb742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khayyam, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:29:55 np0005539508 podman[142229]: 2025-11-29 06:29:55.764922581 +0000 UTC m=+0.149236068 container attach bb8f7561ae3ee158e5d951d0c7164c625385ee355fe1cdd8bf43dd210fdeb742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khayyam, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 01:29:55 np0005539508 python3.9[142292]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:29:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:56.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:56 np0005539508 python3.9[142372]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:29:56 np0005539508 admiring_khayyam[142276]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:29:56 np0005539508 admiring_khayyam[142276]: --> relative data size: 1.0
Nov 29 01:29:56 np0005539508 admiring_khayyam[142276]: --> All data devices are unavailable
Nov 29 01:29:56 np0005539508 systemd[1]: libpod-bb8f7561ae3ee158e5d951d0c7164c625385ee355fe1cdd8bf43dd210fdeb742.scope: Deactivated successfully.
Nov 29 01:29:56 np0005539508 podman[142229]: 2025-11-29 06:29:56.631437441 +0000 UTC m=+1.015750928 container died bb8f7561ae3ee158e5d951d0c7164c625385ee355fe1cdd8bf43dd210fdeb742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 01:29:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:57.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:29:57 np0005539508 python3.9[142546]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:29:57 np0005539508 python3.9[142624]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:29:57 np0005539508 systemd[1]: var-lib-containers-storage-overlay-608006990b870a87c1b1886867c9f5d0f78e1d427a27551247aecf51b629c2a4-merged.mount: Deactivated successfully.
Nov 29 01:29:58 np0005539508 podman[142229]: 2025-11-29 06:29:58.008924644 +0000 UTC m=+2.393238171 container remove bb8f7561ae3ee158e5d951d0c7164c625385ee355fe1cdd8bf43dd210fdeb742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khayyam, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 01:29:58 np0005539508 systemd[1]: libpod-conmon-bb8f7561ae3ee158e5d951d0c7164c625385ee355fe1cdd8bf43dd210fdeb742.scope: Deactivated successfully.
Nov 29 01:29:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:29:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:58.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:58 np0005539508 python3.9[142881]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:29:58 np0005539508 podman[142921]: 2025-11-29 06:29:58.608847178 +0000 UTC m=+0.055355965 container create 7ed2b690d2ac943378a128a1b10f7baf62b69f80d852aec8b15ca508355eeeba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 01:29:58 np0005539508 systemd[1]: Started libpod-conmon-7ed2b690d2ac943378a128a1b10f7baf62b69f80d852aec8b15ca508355eeeba.scope.
Nov 29 01:29:58 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:29:58 np0005539508 podman[142921]: 2025-11-29 06:29:58.575823277 +0000 UTC m=+0.022332144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:29:58 np0005539508 podman[142921]: 2025-11-29 06:29:58.687622926 +0000 UTC m=+0.134131733 container init 7ed2b690d2ac943378a128a1b10f7baf62b69f80d852aec8b15ca508355eeeba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:29:58 np0005539508 podman[142921]: 2025-11-29 06:29:58.696756839 +0000 UTC m=+0.143265616 container start 7ed2b690d2ac943378a128a1b10f7baf62b69f80d852aec8b15ca508355eeeba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 01:29:58 np0005539508 podman[142921]: 2025-11-29 06:29:58.699740655 +0000 UTC m=+0.146249432 container attach 7ed2b690d2ac943378a128a1b10f7baf62b69f80d852aec8b15ca508355eeeba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 01:29:58 np0005539508 friendly_bohr[142940]: 167 167
Nov 29 01:29:58 np0005539508 systemd[1]: libpod-7ed2b690d2ac943378a128a1b10f7baf62b69f80d852aec8b15ca508355eeeba.scope: Deactivated successfully.
Nov 29 01:29:58 np0005539508 podman[142921]: 2025-11-29 06:29:58.701701252 +0000 UTC m=+0.148210029 container died 7ed2b690d2ac943378a128a1b10f7baf62b69f80d852aec8b15ca508355eeeba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 01:29:58 np0005539508 systemd[1]: var-lib-containers-storage-overlay-5aace8dda0a2d1d71217228568b68648eb98b9d3f3540dac9a061642f1b95136-merged.mount: Deactivated successfully.
Nov 29 01:29:58 np0005539508 podman[142921]: 2025-11-29 06:29:58.944289667 +0000 UTC m=+0.390798444 container remove 7ed2b690d2ac943378a128a1b10f7baf62b69f80d852aec8b15ca508355eeeba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:29:58 np0005539508 systemd[1]: libpod-conmon-7ed2b690d2ac943378a128a1b10f7baf62b69f80d852aec8b15ca508355eeeba.scope: Deactivated successfully.
Nov 29 01:29:59 np0005539508 podman[143007]: 2025-11-29 06:29:59.132326011 +0000 UTC m=+0.073359383 container create d50cdd39f7b62695ecb8ff8f5b2c3655be69d68c0884e8f3eb23812945e54c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_shockley, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 01:29:59 np0005539508 podman[143007]: 2025-11-29 06:29:59.085542984 +0000 UTC m=+0.026576366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:29:59 np0005539508 systemd[1]: Started libpod-conmon-d50cdd39f7b62695ecb8ff8f5b2c3655be69d68c0884e8f3eb23812945e54c4f.scope.
Nov 29 01:29:59 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:29:59 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394928abc5a97045dd2219a2d8b9acd23c089b8efbb42e981085caeaee0071e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:29:59 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394928abc5a97045dd2219a2d8b9acd23c089b8efbb42e981085caeaee0071e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:29:59 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394928abc5a97045dd2219a2d8b9acd23c089b8efbb42e981085caeaee0071e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:29:59 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394928abc5a97045dd2219a2d8b9acd23c089b8efbb42e981085caeaee0071e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:29:59 np0005539508 podman[143007]: 2025-11-29 06:29:59.245583742 +0000 UTC m=+0.186617194 container init d50cdd39f7b62695ecb8ff8f5b2c3655be69d68c0884e8f3eb23812945e54c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:29:59 np0005539508 podman[143007]: 2025-11-29 06:29:59.254428447 +0000 UTC m=+0.195461809 container start d50cdd39f7b62695ecb8ff8f5b2c3655be69d68c0884e8f3eb23812945e54c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 01:29:59 np0005539508 podman[143007]: 2025-11-29 06:29:59.26218544 +0000 UTC m=+0.203218842 container attach d50cdd39f7b62695ecb8ff8f5b2c3655be69d68c0884e8f3eb23812945e54c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_shockley, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 01:29:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:29:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:29:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:59.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:29:59 np0005539508 python3.9[143133]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:30:00 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]: {
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:    "1": [
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:        {
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:            "devices": [
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:                "/dev/loop3"
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:            ],
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:            "lv_name": "ceph_lv0",
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:            "lv_size": "7511998464",
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:            "name": "ceph_lv0",
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:            "tags": {
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:                "ceph.cluster_name": "ceph",
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:                "ceph.crush_device_class": "",
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:                "ceph.encrypted": "0",
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:                "ceph.osd_id": "1",
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:                "ceph.type": "block",
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:                "ceph.vdo": "0"
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:            },
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:            "type": "block",
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:            "vg_name": "ceph_vg0"
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:        }
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]:    ]
Nov 29 01:30:00 np0005539508 gallant_shockley[143024]: }
Nov 29 01:30:00 np0005539508 systemd[1]: libpod-d50cdd39f7b62695ecb8ff8f5b2c3655be69d68c0884e8f3eb23812945e54c4f.scope: Deactivated successfully.
Nov 29 01:30:00 np0005539508 podman[143007]: 2025-11-29 06:30:00.105269125 +0000 UTC m=+1.046302497 container died d50cdd39f7b62695ecb8ff8f5b2c3655be69d68c0884e8f3eb23812945e54c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_shockley, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:30:00 np0005539508 systemd[1]: var-lib-containers-storage-overlay-394928abc5a97045dd2219a2d8b9acd23c089b8efbb42e981085caeaee0071e3-merged.mount: Deactivated successfully.
Nov 29 01:30:00 np0005539508 podman[143007]: 2025-11-29 06:30:00.175990151 +0000 UTC m=+1.117023523 container remove d50cdd39f7b62695ecb8ff8f5b2c3655be69d68c0884e8f3eb23812945e54c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 01:30:00 np0005539508 systemd[1]: libpod-conmon-d50cdd39f7b62695ecb8ff8f5b2c3655be69d68c0884e8f3eb23812945e54c4f.scope: Deactivated successfully.
Nov 29 01:30:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:00 np0005539508 python3.9[143215]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:30:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:00.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:00 np0005539508 podman[143483]: 2025-11-29 06:30:00.865500355 +0000 UTC m=+0.064962882 container create 70d3553ade4de74b1c8566bb612cb13694850828b9abe85ecc28bc7eccf3ee77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_rosalind, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 01:30:00 np0005539508 podman[143483]: 2025-11-29 06:30:00.824330639 +0000 UTC m=+0.023793156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:30:00 np0005539508 systemd[1]: Started libpod-conmon-70d3553ade4de74b1c8566bb612cb13694850828b9abe85ecc28bc7eccf3ee77.scope.
Nov 29 01:30:01 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:30:01 np0005539508 python3.9[143532]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:30:01 np0005539508 podman[143483]: 2025-11-29 06:30:01.103328593 +0000 UTC m=+0.302791180 container init 70d3553ade4de74b1c8566bb612cb13694850828b9abe85ecc28bc7eccf3ee77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_rosalind, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 01:30:01 np0005539508 podman[143483]: 2025-11-29 06:30:01.11157306 +0000 UTC m=+0.311035597 container start 70d3553ade4de74b1c8566bb612cb13694850828b9abe85ecc28bc7eccf3ee77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_rosalind, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 01:30:01 np0005539508 confident_rosalind[143535]: 167 167
Nov 29 01:30:01 np0005539508 systemd[1]: libpod-70d3553ade4de74b1c8566bb612cb13694850828b9abe85ecc28bc7eccf3ee77.scope: Deactivated successfully.
Nov 29 01:30:01 np0005539508 podman[143483]: 2025-11-29 06:30:01.181945026 +0000 UTC m=+0.381407523 container attach 70d3553ade4de74b1c8566bb612cb13694850828b9abe85ecc28bc7eccf3ee77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 01:30:01 np0005539508 podman[143483]: 2025-11-29 06:30:01.18312777 +0000 UTC m=+0.382590297 container died 70d3553ade4de74b1c8566bb612cb13694850828b9abe85ecc28bc7eccf3ee77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_rosalind, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:30:01 np0005539508 systemd[1]: var-lib-containers-storage-overlay-bf0e4ebdb006d716dfee6b68ee2c8f5e57a3f51f5e49348803512bc104c3c967-merged.mount: Deactivated successfully.
Nov 29 01:30:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:01.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:01 np0005539508 podman[143483]: 2025-11-29 06:30:01.323407019 +0000 UTC m=+0.522869516 container remove 70d3553ade4de74b1c8566bb612cb13694850828b9abe85ecc28bc7eccf3ee77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 01:30:01 np0005539508 systemd[1]: libpod-conmon-70d3553ade4de74b1c8566bb612cb13694850828b9abe85ecc28bc7eccf3ee77.scope: Deactivated successfully.
Nov 29 01:30:01 np0005539508 podman[143637]: 2025-11-29 06:30:01.511863856 +0000 UTC m=+0.048451136 container create e42c78a83699120790e930044ab02970fa4072b40ed4e370ef90a1e78c1ed642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_torvalds, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 01:30:01 np0005539508 systemd[1]: Started libpod-conmon-e42c78a83699120790e930044ab02970fa4072b40ed4e370ef90a1e78c1ed642.scope.
Nov 29 01:30:01 np0005539508 ceph-mon[74654]: overall HEALTH_OK
Nov 29 01:30:01 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:30:01 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0601f0953f12c833d3e8421b3a883694f9f33fe8344b525b82dc31569bd96c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:30:01 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0601f0953f12c833d3e8421b3a883694f9f33fe8344b525b82dc31569bd96c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:30:01 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0601f0953f12c833d3e8421b3a883694f9f33fe8344b525b82dc31569bd96c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:30:01 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0601f0953f12c833d3e8421b3a883694f9f33fe8344b525b82dc31569bd96c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:30:01 np0005539508 podman[143637]: 2025-11-29 06:30:01.49117954 +0000 UTC m=+0.027766820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:30:01 np0005539508 podman[143637]: 2025-11-29 06:30:01.593104775 +0000 UTC m=+0.129692065 container init e42c78a83699120790e930044ab02970fa4072b40ed4e370ef90a1e78c1ed642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_torvalds, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 01:30:01 np0005539508 podman[143637]: 2025-11-29 06:30:01.6019602 +0000 UTC m=+0.138547460 container start e42c78a83699120790e930044ab02970fa4072b40ed4e370ef90a1e78c1ed642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:30:01 np0005539508 podman[143637]: 2025-11-29 06:30:01.606181812 +0000 UTC m=+0.142769132 container attach e42c78a83699120790e930044ab02970fa4072b40ed4e370ef90a1e78c1ed642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 01:30:01 np0005539508 python3.9[143631]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:30:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:02 np0005539508 suspicious_torvalds[143655]: {
Nov 29 01:30:02 np0005539508 suspicious_torvalds[143655]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:30:02 np0005539508 suspicious_torvalds[143655]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:30:02 np0005539508 suspicious_torvalds[143655]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:30:02 np0005539508 suspicious_torvalds[143655]:        "osd_id": 1,
Nov 29 01:30:02 np0005539508 suspicious_torvalds[143655]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:30:02 np0005539508 suspicious_torvalds[143655]:        "type": "bluestore"
Nov 29 01:30:02 np0005539508 suspicious_torvalds[143655]:    }
Nov 29 01:30:02 np0005539508 suspicious_torvalds[143655]: }
Nov 29 01:30:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:02.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:02 np0005539508 systemd[1]: libpod-e42c78a83699120790e930044ab02970fa4072b40ed4e370ef90a1e78c1ed642.scope: Deactivated successfully.
Nov 29 01:30:02 np0005539508 podman[143637]: 2025-11-29 06:30:02.43026794 +0000 UTC m=+0.966855200 container died e42c78a83699120790e930044ab02970fa4072b40ed4e370ef90a1e78c1ed642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_torvalds, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 01:30:02 np0005539508 python3.9[143811]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:30:02 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:30:02 np0005539508 systemd[1]: Reloading.
Nov 29 01:30:02 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:30:02 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:30:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:03.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:03 np0005539508 python3.9[144078]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:30:03 np0005539508 systemd[1]: var-lib-containers-storage-overlay-8f0601f0953f12c833d3e8421b3a883694f9f33fe8344b525b82dc31569bd96c-merged.mount: Deactivated successfully.
Nov 29 01:30:04 np0005539508 python3.9[144157]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:30:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:04.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:04 np0005539508 podman[143637]: 2025-11-29 06:30:04.501778856 +0000 UTC m=+3.038366126 container remove e42c78a83699120790e930044ab02970fa4072b40ed4e370ef90a1e78c1ed642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_torvalds, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:30:04 np0005539508 systemd[1]: libpod-conmon-e42c78a83699120790e930044ab02970fa4072b40ed4e370ef90a1e78c1ed642.scope: Deactivated successfully.
Nov 29 01:30:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:30:05 np0005539508 python3.9[144310]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:30:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:05.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:05 np0005539508 python3.9[144388]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:30:05 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:30:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:30:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:30:06 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 2c3e7a83-4cfd-4cbc-915e-4b455314c20a does not exist
Nov 29 01:30:06 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 04b0d6b0-f620-42c3-8516-3f41fd175b58 does not exist
Nov 29 01:30:06 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev ce673716-89ff-4575-951f-fea6e049d8ce does not exist
Nov 29 01:30:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:06 np0005539508 python3.9[144540]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:30:06 np0005539508 systemd[1]: Reloading.
Nov 29 01:30:06 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:30:06 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:30:06 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:30:06 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:30:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:06.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:06 np0005539508 systemd[1]: Starting Create netns directory...
Nov 29 01:30:06 np0005539508 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 01:30:06 np0005539508 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 01:30:06 np0005539508 systemd[1]: Finished Create netns directory.
Nov 29 01:30:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:07.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:30:07 np0005539508 python3.9[144785]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:30:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:08.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:08 np0005539508 python3.9[144937]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:30:09 np0005539508 python3.9[145061]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764397808.1351464-1369-68044897903705/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:30:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:30:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:09.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:30:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:10 np0005539508 python3.9[145213]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:30:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:10.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:11 np0005539508 python3.9[145366]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:30:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:11.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:11 np0005539508 python3.9[145489]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397810.6240585-1444-191987691393897/.source.json _original_basename=.yqpy235r follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:30:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:12.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:12 np0005539508 python3.9[145641]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:30:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:30:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:30:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:30:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:30:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:30:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:30:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:30:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:30:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:30:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:30:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:30:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:30:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:30:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:30:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:30:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:30:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:30:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:30:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:30:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:30:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:30:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:30:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:30:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:30:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:13.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:30:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:14.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:30:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:15.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:15 np0005539508 python3.9[146072]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 29 01:30:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:16.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:16 np0005539508 python3.9[146224]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 01:30:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:30:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:17.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:30:17 np0005539508 python3.9[146377]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 01:30:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:30:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:18.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:30:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:19.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:19 np0005539508 python3[146558]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 01:30:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:30:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:20.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:21.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:22.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:23.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:30:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:30:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:30:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:30:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:30:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:30:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:24.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:30:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:25.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:26.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:27.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:28.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:29.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:30:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:30:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:30:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:30:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:30:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:30:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:30:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:30:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:30:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:30:29 np0005539508 podman[146572]: 2025-11-29 06:30:29.678281414 +0000 UTC m=+9.766168840 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 01:30:29 np0005539508 podman[146747]: 2025-11-29 06:30:29.811910665 +0000 UTC m=+0.042226957 container create b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 01:30:29 np0005539508 podman[146747]: 2025-11-29 06:30:29.790117953 +0000 UTC m=+0.020434245 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 01:30:29 np0005539508 python3[146558]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 01:30:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:30:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:30.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:30 np0005539508 python3.9[146935]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:30:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:31.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:31 np0005539508 python3.9[147090]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:30:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:30:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:32.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:30:32 np0005539508 python3.9[147166]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:30:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:33.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:34.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.002000055s ======
Nov 29 01:30:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:35.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Nov 29 01:30:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:30:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:36.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:30:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:30:37 np0005539508 python3.9[147318]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397832.6898708-1708-64719301600252/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:30:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:37.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:37 np0005539508 python3.9[147398]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 01:30:37 np0005539508 systemd[1]: Reloading.
Nov 29 01:30:37 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:30:37 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:30:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:38.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:38 np0005539508 python3.9[147512]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:30:38 np0005539508 systemd[1]: Reloading.
Nov 29 01:30:38 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:30:38 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:30:39 np0005539508 systemd[1]: Starting ovn_controller container...
Nov 29 01:30:39 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:30:39 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f8baf2007c97915a8b8de2e1107524df74412b4e46fb38e4f4437d65da64f4c/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 29 01:30:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:39.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:39 np0005539508 systemd[1]: Started /usr/bin/podman healthcheck run b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7.
Nov 29 01:30:39 np0005539508 podman[147554]: 2025-11-29 06:30:39.469276729 +0000 UTC m=+0.440855806 container init b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 01:30:39 np0005539508 ovn_controller[147569]: + sudo -E kolla_set_configs
Nov 29 01:30:39 np0005539508 podman[147554]: 2025-11-29 06:30:39.512082872 +0000 UTC m=+0.483661859 container start b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 01:30:39 np0005539508 systemd[1]: Created slice User Slice of UID 0.
Nov 29 01:30:39 np0005539508 edpm-start-podman-container[147554]: ovn_controller
Nov 29 01:30:39 np0005539508 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 29 01:30:39 np0005539508 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 29 01:30:39 np0005539508 systemd[1]: Starting User Manager for UID 0...
Nov 29 01:30:39 np0005539508 edpm-start-podman-container[147553]: Creating additional drop-in dependency for "ovn_controller" (b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7)
Nov 29 01:30:39 np0005539508 podman[147575]: 2025-11-29 06:30:39.624839276 +0000 UTC m=+0.094258664 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 01:30:39 np0005539508 systemd[1]: Reloading.
Nov 29 01:30:39 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:30:39 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:30:39 np0005539508 systemd[147597]: Queued start job for default target Main User Target.
Nov 29 01:30:39 np0005539508 systemd[147597]: Created slice User Application Slice.
Nov 29 01:30:39 np0005539508 systemd[147597]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 29 01:30:39 np0005539508 systemd[147597]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 01:30:39 np0005539508 systemd[147597]: Reached target Paths.
Nov 29 01:30:39 np0005539508 systemd[147597]: Reached target Timers.
Nov 29 01:30:39 np0005539508 systemd[147597]: Starting D-Bus User Message Bus Socket...
Nov 29 01:30:39 np0005539508 systemd[147597]: Starting Create User's Volatile Files and Directories...
Nov 29 01:30:39 np0005539508 systemd[147597]: Finished Create User's Volatile Files and Directories.
Nov 29 01:30:39 np0005539508 systemd[147597]: Listening on D-Bus User Message Bus Socket.
Nov 29 01:30:39 np0005539508 systemd[147597]: Reached target Sockets.
Nov 29 01:30:39 np0005539508 systemd[147597]: Reached target Basic System.
Nov 29 01:30:39 np0005539508 systemd[147597]: Reached target Main User Target.
Nov 29 01:30:39 np0005539508 systemd[147597]: Startup finished in 148ms.
Nov 29 01:30:39 np0005539508 systemd[1]: Started User Manager for UID 0.
Nov 29 01:30:39 np0005539508 systemd[1]: Started ovn_controller container.
Nov 29 01:30:39 np0005539508 systemd[1]: b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7-5bed9f1a8190501d.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 01:30:39 np0005539508 systemd[1]: b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7-5bed9f1a8190501d.service: Failed with result 'exit-code'.
Nov 29 01:30:39 np0005539508 systemd[1]: Started Session c1 of User root.
Nov 29 01:30:39 np0005539508 ovn_controller[147569]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 01:30:39 np0005539508 ovn_controller[147569]: INFO:__main__:Validating config file
Nov 29 01:30:39 np0005539508 ovn_controller[147569]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 01:30:39 np0005539508 ovn_controller[147569]: INFO:__main__:Writing out command to execute
Nov 29 01:30:39 np0005539508 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 29 01:30:39 np0005539508 ovn_controller[147569]: ++ cat /run_command
Nov 29 01:30:39 np0005539508 ovn_controller[147569]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 01:30:39 np0005539508 ovn_controller[147569]: + ARGS=
Nov 29 01:30:39 np0005539508 ovn_controller[147569]: + sudo kolla_copy_cacerts
Nov 29 01:30:40 np0005539508 systemd[1]: Started Session c2 of User root.
Nov 29 01:30:40 np0005539508 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: + [[ ! -n '' ]]
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: + . kolla_extend_start
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: + umask 0022
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 29 01:30:40 np0005539508 NetworkManager[49224]: <info>  [1764397840.0530] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 29 01:30:40 np0005539508 NetworkManager[49224]: <info>  [1764397840.0539] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 01:30:40 np0005539508 NetworkManager[49224]: <info>  [1764397840.0552] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 29 01:30:40 np0005539508 NetworkManager[49224]: <info>  [1764397840.0559] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 29 01:30:40 np0005539508 NetworkManager[49224]: <info>  [1764397840.0564] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 01:30:40 np0005539508 kernel: br-int: entered promiscuous mode
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00022|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00024|main|INFO|OVS feature set changed, force recompute.
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 01:30:40 np0005539508 NetworkManager[49224]: <info>  [1764397840.0791] manager: (ovn-2fa832-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 29 01:30:40 np0005539508 NetworkManager[49224]: <info>  [1764397840.0799] manager: (ovn-e15f55-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Nov 29 01:30:40 np0005539508 NetworkManager[49224]: <info>  [1764397840.0806] manager: (ovn-fa6f2e-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Nov 29 01:30:40 np0005539508 kernel: genev_sys_6081: entered promiscuous mode
Nov 29 01:30:40 np0005539508 systemd-udevd[147723]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 01:30:40 np0005539508 systemd-udevd[147725]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 01:30:40 np0005539508 NetworkManager[49224]: <info>  [1764397840.0964] device (genev_sys_6081): carrier: link connected
Nov 29 01:30:40 np0005539508 NetworkManager[49224]: <info>  [1764397840.0966] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Nov 29 01:30:40 np0005539508 ovn_controller[147569]: 2025-11-29T06:30:40Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 01:30:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:40.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:40 np0005539508 python3.9[147835]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:30:40 np0005539508 ovs-vsctl[147836]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 29 01:30:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:30:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:41.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:30:41 np0005539508 python3.9[147989]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:30:41 np0005539508 ovs-vsctl[147991]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 29 01:30:41 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:30:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:42.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:42 np0005539508 python3.9[148146]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:30:42 np0005539508 ovs-vsctl[148147]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 29 01:30:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:30:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:43.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:30:43 np0005539508 systemd-logind[797]: Session 46 logged out. Waiting for processes to exit.
Nov 29 01:30:43 np0005539508 systemd[1]: session-46.scope: Deactivated successfully.
Nov 29 01:30:43 np0005539508 systemd[1]: session-46.scope: Consumed 1min 1.999s CPU time.
Nov 29 01:30:43 np0005539508 systemd-logind[797]: Removed session 46.
Nov 29 01:30:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:44.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:45.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:46.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:30:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:30:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:47.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:30:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:30:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:48.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:30:48 np0005539508 systemd-logind[797]: New session 48 of user zuul.
Nov 29 01:30:48 np0005539508 systemd[1]: Started Session 48 of User zuul.
Nov 29 01:30:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:30:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:49.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:30:50 np0005539508 systemd[1]: Stopping User Manager for UID 0...
Nov 29 01:30:50 np0005539508 systemd[147597]: Activating special unit Exit the Session...
Nov 29 01:30:50 np0005539508 systemd[147597]: Stopped target Main User Target.
Nov 29 01:30:50 np0005539508 systemd[147597]: Stopped target Basic System.
Nov 29 01:30:50 np0005539508 systemd[147597]: Stopped target Paths.
Nov 29 01:30:50 np0005539508 systemd[147597]: Stopped target Sockets.
Nov 29 01:30:50 np0005539508 systemd[147597]: Stopped target Timers.
Nov 29 01:30:50 np0005539508 systemd[147597]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 01:30:50 np0005539508 systemd[147597]: Closed D-Bus User Message Bus Socket.
Nov 29 01:30:50 np0005539508 systemd[147597]: Stopped Create User's Volatile Files and Directories.
Nov 29 01:30:50 np0005539508 systemd[147597]: Removed slice User Application Slice.
Nov 29 01:30:50 np0005539508 systemd[147597]: Reached target Shutdown.
Nov 29 01:30:50 np0005539508 systemd[147597]: Finished Exit the Session.
Nov 29 01:30:50 np0005539508 systemd[147597]: Reached target Exit the Session.
Nov 29 01:30:50 np0005539508 systemd[1]: user@0.service: Deactivated successfully.
Nov 29 01:30:50 np0005539508 systemd[1]: Stopped User Manager for UID 0.
Nov 29 01:30:50 np0005539508 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 29 01:30:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:50 np0005539508 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 29 01:30:50 np0005539508 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 29 01:30:50 np0005539508 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 29 01:30:50 np0005539508 systemd[1]: Removed slice User Slice of UID 0.
Nov 29 01:30:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:30:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:50.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:30:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:30:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:51.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:30:51 np0005539508 python3.9[148386]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:30:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:30:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:52.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:30:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:30:52 np0005539508 python3.9[148545]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:30:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:30:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:53.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:30:53 np0005539508 python3.9[148698]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:30:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:30:54
Nov 29 01:30:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:30:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:30:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['images', 'volumes', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'vms', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data']
Nov 29 01:30:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:30:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:30:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:30:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:30:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:30:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:30:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:30:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:54 np0005539508 python3.9[148850]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:30:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:30:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:54.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:30:55 np0005539508 python3.9[149003]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:30:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:55.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:55 np0005539508 python3.9[149157]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:30:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:56.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:56 np0005539508 python3.9[149307]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:30:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:57.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:57 np0005539508 python3.9[149460]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 29 01:30:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:30:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:30:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:58.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:59 np0005539508 python3.9[149612]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:30:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:30:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:30:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:59.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:30:59 np0005539508 python3.9[149733]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764397858.4473093-223-256574200726365/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:31:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:31:00 np0005539508 python3.9[149883]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:31:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:00.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:00 np0005539508 python3.9[150004]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764397859.9621308-268-100609296722883/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:31:01 np0005539508 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 29 01:31:01 np0005539508 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 29 01:31:01 np0005539508 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Nov 29 01:31:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:01.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:02 np0005539508 python3.9[150157]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 01:31:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 6.2 KiB/s rd, 0 B/s wr, 10 op/s
Nov 29 01:31:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:02.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:03 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:31:03 np0005539508 python3.9[150242]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:31:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:03.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 0 B/s wr, 66 op/s
Nov 29 01:31:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:04.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:05.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:05 np0005539508 python3.9[150446]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 01:31:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 0 B/s wr, 66 op/s
Nov 29 01:31:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:31:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:06.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:31:06 np0005539508 python3.9[150599]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:31:07 np0005539508 python3.9[150721]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764397866.1312451-379-143883420357389/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:31:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:07.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:07 np0005539508 python3.9[151003]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:31:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:31:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 0 B/s wr, 112 op/s
Nov 29 01:31:08 np0005539508 python3.9[151124]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764397867.4019048-379-45360342350863/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:31:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:08.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:31:09 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:31:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:31:09 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:31:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:31:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:09.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:31:09 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:31:09 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:31:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 01:31:09 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 01:31:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:31:09 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:31:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:31:09 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:31:09 np0005539508 python3.9[151275]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:31:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:31:10 np0005539508 ovn_controller[147569]: 2025-11-29T06:31:10Z|00025|memory|INFO|16384 kB peak resident set size after 30.1 seconds
Nov 29 01:31:10 np0005539508 ovn_controller[147569]: 2025-11-29T06:31:10Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:2
Nov 29 01:31:10 np0005539508 podman[151276]: 2025-11-29 06:31:10.176699763 +0000 UTC m=+0.142002783 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Nov 29 01:31:10 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:31:10 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 90de5ce7-7af4-4422-9b1f-ec8b6115f9af does not exist
Nov 29 01:31:10 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 86f4d47d-5d09-4c4b-ac22-c0ff46ab5c73 does not exist
Nov 29 01:31:10 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 0a459b10-fd8c-4dcd-a4fb-61d5b119accf does not exist
Nov 29 01:31:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:31:10 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:31:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:31:10 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:31:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:31:10 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:31:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 75 KiB/s rd, 0 B/s wr, 124 op/s
Nov 29 01:31:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:10.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:10 np0005539508 python3.9[151468]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764397869.5081654-511-67590668071521/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:31:10 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 01:31:10 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:31:10 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:31:10 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:31:10 np0005539508 podman[151631]: 2025-11-29 06:31:10.78668673 +0000 UTC m=+0.021728511 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:31:11 np0005539508 podman[151631]: 2025-11-29 06:31:11.102089141 +0000 UTC m=+0.337130932 container create 3623a6d3c2dc320ae3f233a4811dbf73efeaef8cbe0c14d747d683e9a5801304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_agnesi, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 01:31:11 np0005539508 python3.9[151726]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:31:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:31:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:11.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:31:11 np0005539508 systemd[1]: Started libpod-conmon-3623a6d3c2dc320ae3f233a4811dbf73efeaef8cbe0c14d747d683e9a5801304.scope.
Nov 29 01:31:11 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:31:11 np0005539508 podman[151631]: 2025-11-29 06:31:11.612456105 +0000 UTC m=+0.847497886 container init 3623a6d3c2dc320ae3f233a4811dbf73efeaef8cbe0c14d747d683e9a5801304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_agnesi, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:31:11 np0005539508 podman[151631]: 2025-11-29 06:31:11.620466016 +0000 UTC m=+0.855507777 container start 3623a6d3c2dc320ae3f233a4811dbf73efeaef8cbe0c14d747d683e9a5801304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_agnesi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:31:11 np0005539508 happy_agnesi[151776]: 167 167
Nov 29 01:31:11 np0005539508 systemd[1]: libpod-3623a6d3c2dc320ae3f233a4811dbf73efeaef8cbe0c14d747d683e9a5801304.scope: Deactivated successfully.
Nov 29 01:31:11 np0005539508 podman[151631]: 2025-11-29 06:31:11.692416553 +0000 UTC m=+0.927458324 container attach 3623a6d3c2dc320ae3f233a4811dbf73efeaef8cbe0c14d747d683e9a5801304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_agnesi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 01:31:11 np0005539508 podman[151631]: 2025-11-29 06:31:11.693531874 +0000 UTC m=+0.928573625 container died 3623a6d3c2dc320ae3f233a4811dbf73efeaef8cbe0c14d747d683e9a5801304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 01:31:11 np0005539508 python3.9[151864]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764397870.7100148-511-24536040884562/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:31:12 np0005539508 systemd[1]: var-lib-containers-storage-overlay-8f9db095f7d449b93b5322809ea8153c7d2b5937d13da36a1084930d1d373739-merged.mount: Deactivated successfully.
Nov 29 01:31:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 86 KiB/s rd, 0 B/s wr, 142 op/s
Nov 29 01:31:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:12.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:12 np0005539508 python3.9[152015]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:31:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:31:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:31:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:31:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:31:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:31:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:31:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:31:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:31:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:31:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:31:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:31:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:31:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:31:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:31:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:31:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:31:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:31:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:31:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:31:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:31:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:31:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:31:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:31:13 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:31:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:13.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:13 np0005539508 python3.9[152170]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:31:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 88 KiB/s rd, 0 B/s wr, 146 op/s
Nov 29 01:31:14 np0005539508 python3.9[152322]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:31:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:14.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:14 np0005539508 python3.9[152400]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:31:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:15.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:15 np0005539508 python3.9[152553]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:31:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 54 KiB/s rd, 0 B/s wr, 90 op/s
Nov 29 01:31:16 np0005539508 python3.9[152633]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:31:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:31:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:16.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:31:17 np0005539508 python3.9[152786]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:31:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:31:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:17.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:31:18 np0005539508 python3.9[152938]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:31:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 54 KiB/s rd, 0 B/s wr, 90 op/s
Nov 29 01:31:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:18.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:18 np0005539508 python3.9[153016]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:31:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:19.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:19 np0005539508 python3.9[153169]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:31:20 np0005539508 python3.9[153247]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:31:20 np0005539508 podman[151631]: 2025-11-29 06:31:20.231347801 +0000 UTC m=+9.466389592 container remove 3623a6d3c2dc320ae3f233a4811dbf73efeaef8cbe0c14d747d683e9a5801304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:31:20 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:31:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 44 op/s
Nov 29 01:31:20 np0005539508 systemd[1]: libpod-conmon-3623a6d3c2dc320ae3f233a4811dbf73efeaef8cbe0c14d747d683e9a5801304.scope: Deactivated successfully.
Nov 29 01:31:20 np0005539508 podman[153280]: 2025-11-29 06:31:20.405832882 +0000 UTC m=+0.029050000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:31:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:20.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:20 np0005539508 podman[153280]: 2025-11-29 06:31:20.990522292 +0000 UTC m=+0.613739400 container create 39562b3b22c2eb6dd499946ad8077a06a36debee5eb254f67e45f8fbd119bc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 01:31:21 np0005539508 python3.9[153421]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:31:21 np0005539508 systemd[1]: Reloading.
Nov 29 01:31:21 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:31:21 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:31:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:21.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:21 np0005539508 systemd[1]: Started libpod-conmon-39562b3b22c2eb6dd499946ad8077a06a36debee5eb254f67e45f8fbd119bc35.scope.
Nov 29 01:31:21 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:31:21 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6d6088165b79e8d7aa36d701b062e7c0291339d6e11548ffb4749cf82ab516/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:31:21 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6d6088165b79e8d7aa36d701b062e7c0291339d6e11548ffb4749cf82ab516/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:31:21 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6d6088165b79e8d7aa36d701b062e7c0291339d6e11548ffb4749cf82ab516/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:31:21 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6d6088165b79e8d7aa36d701b062e7c0291339d6e11548ffb4749cf82ab516/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:31:21 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6d6088165b79e8d7aa36d701b062e7c0291339d6e11548ffb4749cf82ab516/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:31:21 np0005539508 podman[153280]: 2025-11-29 06:31:21.564571707 +0000 UTC m=+1.187788855 container init 39562b3b22c2eb6dd499946ad8077a06a36debee5eb254f67e45f8fbd119bc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 01:31:21 np0005539508 podman[153280]: 2025-11-29 06:31:21.576708144 +0000 UTC m=+1.199925232 container start 39562b3b22c2eb6dd499946ad8077a06a36debee5eb254f67e45f8fbd119bc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:31:21 np0005539508 podman[153280]: 2025-11-29 06:31:21.721306481 +0000 UTC m=+1.344523589 container attach 39562b3b22c2eb6dd499946ad8077a06a36debee5eb254f67e45f8fbd119bc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 01:31:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 0 B/s wr, 32 op/s
Nov 29 01:31:22 np0005539508 python3.9[153620]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:31:22 np0005539508 stoic_colden[153461]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:31:22 np0005539508 stoic_colden[153461]: --> relative data size: 1.0
Nov 29 01:31:22 np0005539508 stoic_colden[153461]: --> All data devices are unavailable
Nov 29 01:31:22 np0005539508 systemd[1]: libpod-39562b3b22c2eb6dd499946ad8077a06a36debee5eb254f67e45f8fbd119bc35.scope: Deactivated successfully.
Nov 29 01:31:22 np0005539508 podman[153280]: 2025-11-29 06:31:22.478645398 +0000 UTC m=+2.101862536 container died 39562b3b22c2eb6dd499946ad8077a06a36debee5eb254f67e45f8fbd119bc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 01:31:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:22.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:23 np0005539508 python3.9[153718]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:31:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:23.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:23 np0005539508 systemd[1]: var-lib-containers-storage-overlay-ef6d6088165b79e8d7aa36d701b062e7c0291339d6e11548ffb4749cf82ab516-merged.mount: Deactivated successfully.
Nov 29 01:31:24 np0005539508 python3.9[153872]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:31:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:31:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:31:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:31:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:31:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:31:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:31:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 0 B/s wr, 13 op/s
Nov 29 01:31:24 np0005539508 python3.9[154000]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:31:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:24.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:24 np0005539508 podman[153280]: 2025-11-29 06:31:24.597792198 +0000 UTC m=+4.221009336 container remove 39562b3b22c2eb6dd499946ad8077a06a36debee5eb254f67e45f8fbd119bc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 01:31:24 np0005539508 systemd[1]: libpod-conmon-39562b3b22c2eb6dd499946ad8077a06a36debee5eb254f67e45f8fbd119bc35.scope: Deactivated successfully.
Nov 29 01:31:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:31:25 np0005539508 python3.9[154253]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:31:25 np0005539508 systemd[1]: Reloading.
Nov 29 01:31:25 np0005539508 podman[154296]: 2025-11-29 06:31:25.327707614 +0000 UTC m=+0.090780863 container create 680eb57336c3889131324d547153d8ee5cfb552a2ce77ee178573b2dd87a571d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jang, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:31:25 np0005539508 podman[154296]: 2025-11-29 06:31:25.270395648 +0000 UTC m=+0.033468927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:31:25 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:31:25 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:31:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:25.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:25 np0005539508 systemd[1]: Started libpod-conmon-680eb57336c3889131324d547153d8ee5cfb552a2ce77ee178573b2dd87a571d.scope.
Nov 29 01:31:25 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:31:25 np0005539508 systemd[1]: Starting Create netns directory...
Nov 29 01:31:25 np0005539508 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 01:31:25 np0005539508 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 01:31:25 np0005539508 systemd[1]: Finished Create netns directory.
Nov 29 01:31:25 np0005539508 podman[154296]: 2025-11-29 06:31:25.886008079 +0000 UTC m=+0.649081358 container init 680eb57336c3889131324d547153d8ee5cfb552a2ce77ee178573b2dd87a571d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jang, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 01:31:25 np0005539508 podman[154296]: 2025-11-29 06:31:25.897579339 +0000 UTC m=+0.660652588 container start 680eb57336c3889131324d547153d8ee5cfb552a2ce77ee178573b2dd87a571d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jang, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:31:25 np0005539508 upbeat_jang[154349]: 167 167
Nov 29 01:31:25 np0005539508 systemd[1]: libpod-680eb57336c3889131324d547153d8ee5cfb552a2ce77ee178573b2dd87a571d.scope: Deactivated successfully.
Nov 29 01:31:26 np0005539508 podman[154296]: 2025-11-29 06:31:26.069491197 +0000 UTC m=+0.832564536 container attach 680eb57336c3889131324d547153d8ee5cfb552a2ce77ee178573b2dd87a571d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jang, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 01:31:26 np0005539508 podman[154296]: 2025-11-29 06:31:26.071960887 +0000 UTC m=+0.835034256 container died 680eb57336c3889131324d547153d8ee5cfb552a2ce77ee178573b2dd87a571d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jang, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:31:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:31:26 np0005539508 systemd[1]: var-lib-containers-storage-overlay-60d5b7ebad227636a4a0fee740c86fbc069291443b571cc160d105670453394c-merged.mount: Deactivated successfully.
Nov 29 01:31:26 np0005539508 podman[154296]: 2025-11-29 06:31:26.410690626 +0000 UTC m=+1.173763875 container remove 680eb57336c3889131324d547153d8ee5cfb552a2ce77ee178573b2dd87a571d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jang, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:31:26 np0005539508 systemd[1]: libpod-conmon-680eb57336c3889131324d547153d8ee5cfb552a2ce77ee178573b2dd87a571d.scope: Deactivated successfully.
Nov 29 01:31:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:26.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:26 np0005539508 podman[154404]: 2025-11-29 06:31:26.548630764 +0000 UTC m=+0.027587689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:31:26 np0005539508 podman[154404]: 2025-11-29 06:31:26.860547527 +0000 UTC m=+0.339504422 container create 42277cc83bf504185cea2cca0cef0ef10f623a83be4cef1e864e33a01d60307a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:31:26 np0005539508 systemd[1]: Started libpod-conmon-42277cc83bf504185cea2cca0cef0ef10f623a83be4cef1e864e33a01d60307a.scope.
Nov 29 01:31:26 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:31:26 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aee6555dfda219c288f07192279f706af6a84079b559e7f1a203787fd9b40310/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:31:26 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aee6555dfda219c288f07192279f706af6a84079b559e7f1a203787fd9b40310/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:31:26 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aee6555dfda219c288f07192279f706af6a84079b559e7f1a203787fd9b40310/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:31:26 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aee6555dfda219c288f07192279f706af6a84079b559e7f1a203787fd9b40310/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:31:26 np0005539508 podman[154404]: 2025-11-29 06:31:26.975050436 +0000 UTC m=+0.454007361 container init 42277cc83bf504185cea2cca0cef0ef10f623a83be4cef1e864e33a01d60307a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 01:31:26 np0005539508 podman[154404]: 2025-11-29 06:31:26.985738431 +0000 UTC m=+0.464695326 container start 42277cc83bf504185cea2cca0cef0ef10f623a83be4cef1e864e33a01d60307a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:31:26 np0005539508 podman[154404]: 2025-11-29 06:31:26.991154125 +0000 UTC m=+0.470111020 container attach 42277cc83bf504185cea2cca0cef0ef10f623a83be4cef1e864e33a01d60307a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ganguly, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:31:27 np0005539508 python3.9[154553]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:31:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:27.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]: {
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:    "1": [
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:        {
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:            "devices": [
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:                "/dev/loop3"
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:            ],
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:            "lv_name": "ceph_lv0",
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:            "lv_size": "7511998464",
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:            "name": "ceph_lv0",
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:            "tags": {
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:                "ceph.cluster_name": "ceph",
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:                "ceph.crush_device_class": "",
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:                "ceph.encrypted": "0",
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:                "ceph.osd_id": "1",
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:                "ceph.type": "block",
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:                "ceph.vdo": "0"
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:            },
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:            "type": "block",
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:            "vg_name": "ceph_vg0"
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:        }
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]:    ]
Nov 29 01:31:27 np0005539508 intelligent_ganguly[154520]: }
Nov 29 01:31:27 np0005539508 systemd[1]: libpod-42277cc83bf504185cea2cca0cef0ef10f623a83be4cef1e864e33a01d60307a.scope: Deactivated successfully.
Nov 29 01:31:27 np0005539508 podman[154404]: 2025-11-29 06:31:27.748190284 +0000 UTC m=+1.227147179 container died 42277cc83bf504185cea2cca0cef0ef10f623a83be4cef1e864e33a01d60307a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 01:31:28 np0005539508 python3.9[154711]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:31:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:31:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:31:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:28.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:31:28 np0005539508 python3.9[154847]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764397887.394108-964-79036037087272/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:31:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:31:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:29.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:31:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:31:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:31:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:31:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:31:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:31:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:31:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:31:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:31:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:31:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:31:29 np0005539508 python3.9[155001]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:31:29 np0005539508 systemd[1]: var-lib-containers-storage-overlay-aee6555dfda219c288f07192279f706af6a84079b559e7f1a203787fd9b40310-merged.mount: Deactivated successfully.
Nov 29 01:31:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:31:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:31:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:30.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:31:30 np0005539508 python3.9[155155]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:31:31 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:31:31 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 29 01:31:31 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:31.200151) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 01:31:31 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 29 01:31:31 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397891200201, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1245, "num_deletes": 252, "total_data_size": 2134309, "memory_usage": 2167040, "flush_reason": "Manual Compaction"}
Nov 29 01:31:31 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 29 01:31:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:31.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:31 np0005539508 python3.9[155279]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397889.9975817-1039-22560831867865/.source.json _original_basename=.onaplbc5 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:31:31 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397891843278, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 2087601, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9707, "largest_seqno": 10951, "table_properties": {"data_size": 2081808, "index_size": 3124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12444, "raw_average_key_size": 19, "raw_value_size": 2069865, "raw_average_value_size": 3254, "num_data_blocks": 144, "num_entries": 636, "num_filter_entries": 636, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764397736, "oldest_key_time": 1764397736, "file_creation_time": 1764397891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:31:31 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 643164 microseconds, and 5130 cpu microseconds.
Nov 29 01:31:31 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 01:31:31 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:31.843317) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 2087601 bytes OK
Nov 29 01:31:31 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:31.843334) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 29 01:31:31 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:31.899199) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 29 01:31:31 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:31.899283) EVENT_LOG_v1 {"time_micros": 1764397891899268, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 01:31:31 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:31.899319) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 01:31:31 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 2128822, prev total WAL file size 2160595, number of live WAL files 2.
Nov 29 01:31:31 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:31:31 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:31.900677) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 29 01:31:31 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 01:31:31 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(2038KB)], [23(9227KB)]
Nov 29 01:31:31 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397891900749, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 11536572, "oldest_snapshot_seqno": -1}
Nov 29 01:31:31 np0005539508 podman[154404]: 2025-11-29 06:31:31.928022495 +0000 UTC m=+5.406979380 container remove 42277cc83bf504185cea2cca0cef0ef10f623a83be4cef1e864e33a01d60307a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ganguly, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 01:31:31 np0005539508 systemd[1]: libpod-conmon-42277cc83bf504185cea2cca0cef0ef10f623a83be4cef1e864e33a01d60307a.scope: Deactivated successfully.
Nov 29 01:31:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:31:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:32.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:32 np0005539508 podman[155572]: 2025-11-29 06:31:32.566029647 +0000 UTC m=+0.037649896 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:31:32 np0005539508 python3.9[155506]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:31:33 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3999 keys, 9547129 bytes, temperature: kUnknown
Nov 29 01:31:33 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397893301873, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 9547129, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9515660, "index_size": 20351, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 98538, "raw_average_key_size": 24, "raw_value_size": 9438540, "raw_average_value_size": 2360, "num_data_blocks": 889, "num_entries": 3999, "num_filter_entries": 3999, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 1764397891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:31:33 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 01:31:33 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:33.302304) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 9547129 bytes
Nov 29 01:31:33 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:33.407715) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 8.2 rd, 6.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.0 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(10.1) write-amplify(4.6) OK, records in: 4519, records dropped: 520 output_compression: NoCompression
Nov 29 01:31:33 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:33.407760) EVENT_LOG_v1 {"time_micros": 1764397893407741, "job": 8, "event": "compaction_finished", "compaction_time_micros": 1401367, "compaction_time_cpu_micros": 36236, "output_level": 6, "num_output_files": 1, "total_output_size": 9547129, "num_input_records": 4519, "num_output_records": 3999, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 01:31:33 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:31:33 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397893408274, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 29 01:31:33 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:31:33 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397893409691, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 29 01:31:33 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:31.900394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:31:33 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:33.409791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:31:33 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:33.409797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:31:33 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:33.409799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:31:33 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:33.409800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:31:33 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:33.409802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:31:33 np0005539508 podman[155572]: 2025-11-29 06:31:33.409805151 +0000 UTC m=+0.881425370 container create 9f9c572a122a1641e5c497327020e581c8414faafff8afdc9286c6a3e8c1f953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:31:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:33.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:33 np0005539508 systemd[1]: Started libpod-conmon-9f9c572a122a1641e5c497327020e581c8414faafff8afdc9286c6a3e8c1f953.scope.
Nov 29 01:31:33 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:31:34 np0005539508 podman[155572]: 2025-11-29 06:31:34.093855298 +0000 UTC m=+1.565475547 container init 9f9c572a122a1641e5c497327020e581c8414faafff8afdc9286c6a3e8c1f953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:31:34 np0005539508 podman[155572]: 2025-11-29 06:31:34.103489553 +0000 UTC m=+1.575109772 container start 9f9c572a122a1641e5c497327020e581c8414faafff8afdc9286c6a3e8c1f953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:31:34 np0005539508 heuristic_khorana[155791]: 167 167
Nov 29 01:31:34 np0005539508 systemd[1]: libpod-9f9c572a122a1641e5c497327020e581c8414faafff8afdc9286c6a3e8c1f953.scope: Deactivated successfully.
Nov 29 01:31:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:31:34 np0005539508 podman[155572]: 2025-11-29 06:31:34.50309617 +0000 UTC m=+1.974716409 container attach 9f9c572a122a1641e5c497327020e581c8414faafff8afdc9286c6a3e8c1f953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_khorana, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 01:31:34 np0005539508 podman[155572]: 2025-11-29 06:31:34.504010116 +0000 UTC m=+1.975630345 container died 9f9c572a122a1641e5c497327020e581c8414faafff8afdc9286c6a3e8c1f953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:31:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:34.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:35 np0005539508 python3.9[156032]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 29 01:31:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:31:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:35.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:31:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:31:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:36.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:31:37 np0005539508 python3.9[156185]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 01:31:37 np0005539508 systemd[1]: var-lib-containers-storage-overlay-1fce32a3a945796542af4a5f507bb13a77bc014e453508e0ac4ef6e7629cabc8-merged.mount: Deactivated successfully.
Nov 29 01:31:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:37.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:38 np0005539508 python3.9[156338]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 01:31:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:31:38 np0005539508 podman[155572]: 2025-11-29 06:31:38.346368943 +0000 UTC m=+5.817989162 container remove 9f9c572a122a1641e5c497327020e581c8414faafff8afdc9286c6a3e8c1f953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_khorana, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:31:38 np0005539508 systemd[1]: libpod-conmon-9f9c572a122a1641e5c497327020e581c8414faafff8afdc9286c6a3e8c1f953.scope: Deactivated successfully.
Nov 29 01:31:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:38.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:38 np0005539508 podman[156372]: 2025-11-29 06:31:38.496691895 +0000 UTC m=+0.025842399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:31:38 np0005539508 podman[156372]: 2025-11-29 06:31:38.672160464 +0000 UTC m=+0.201310938 container create 237684982b117cfe47d703d903fe043dd5f943ba250e15ba75e4118f0ada42c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 01:31:38 np0005539508 systemd[1]: Started libpod-conmon-237684982b117cfe47d703d903fe043dd5f943ba250e15ba75e4118f0ada42c8.scope.
Nov 29 01:31:38 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:31:38 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60d8796ab8b37aeb919f432016d4a7706ddb9a39c62c7a54bb2d8598edddec5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:31:38 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60d8796ab8b37aeb919f432016d4a7706ddb9a39c62c7a54bb2d8598edddec5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:31:38 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60d8796ab8b37aeb919f432016d4a7706ddb9a39c62c7a54bb2d8598edddec5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:31:38 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60d8796ab8b37aeb919f432016d4a7706ddb9a39c62c7a54bb2d8598edddec5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:31:39 np0005539508 podman[156372]: 2025-11-29 06:31:39.296076993 +0000 UTC m=+0.825227487 container init 237684982b117cfe47d703d903fe043dd5f943ba250e15ba75e4118f0ada42c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dijkstra, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 01:31:39 np0005539508 podman[156372]: 2025-11-29 06:31:39.30435903 +0000 UTC m=+0.833509504 container start 237684982b117cfe47d703d903fe043dd5f943ba250e15ba75e4118f0ada42c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 01:31:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:39.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:40 np0005539508 epic_dijkstra[156412]: {
Nov 29 01:31:40 np0005539508 epic_dijkstra[156412]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:31:40 np0005539508 epic_dijkstra[156412]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:31:40 np0005539508 epic_dijkstra[156412]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:31:40 np0005539508 epic_dijkstra[156412]:        "osd_id": 1,
Nov 29 01:31:40 np0005539508 epic_dijkstra[156412]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:31:40 np0005539508 epic_dijkstra[156412]:        "type": "bluestore"
Nov 29 01:31:40 np0005539508 epic_dijkstra[156412]:    }
Nov 29 01:31:40 np0005539508 epic_dijkstra[156412]: }
Nov 29 01:31:40 np0005539508 systemd[1]: libpod-237684982b117cfe47d703d903fe043dd5f943ba250e15ba75e4118f0ada42c8.scope: Deactivated successfully.
Nov 29 01:31:40 np0005539508 python3[156548]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 01:31:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:31:40 np0005539508 podman[156372]: 2025-11-29 06:31:40.523685844 +0000 UTC m=+2.052836358 container attach 237684982b117cfe47d703d903fe043dd5f943ba250e15ba75e4118f0ada42c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dijkstra, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:31:40 np0005539508 podman[156372]: 2025-11-29 06:31:40.524646381 +0000 UTC m=+2.053796885 container died 237684982b117cfe47d703d903fe043dd5f943ba250e15ba75e4118f0ada42c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dijkstra, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 01:31:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:31:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:40.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:31:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:41.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:42 np0005539508 systemd[1]: var-lib-containers-storage-overlay-f60d8796ab8b37aeb919f432016d4a7706ddb9a39c62c7a54bb2d8598edddec5-merged.mount: Deactivated successfully.
Nov 29 01:31:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:31:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:31:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:31:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:42.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:31:43 np0005539508 podman[156372]: 2025-11-29 06:31:43.332010196 +0000 UTC m=+4.861160670 container remove 237684982b117cfe47d703d903fe043dd5f943ba250e15ba75e4118f0ada42c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 01:31:43 np0005539508 systemd[1]: libpod-conmon-237684982b117cfe47d703d903fe043dd5f943ba250e15ba75e4118f0ada42c8.scope: Deactivated successfully.
Nov 29 01:31:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:31:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:43.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:43 np0005539508 podman[156588]: 2025-11-29 06:31:43.535736242 +0000 UTC m=+2.491570222 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 01:31:44 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:31:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:31:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:31:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:44.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:45.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:45 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:31:45 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 7612b81e-1244-4234-b8e7-e0ce3293afcb does not exist
Nov 29 01:31:45 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 9229c554-b976-443b-8b0d-52d2bfe95898 does not exist
Nov 29 01:31:45 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 837caff3-9e84-45ee-af54-03e8e27bb7f6 does not exist
Nov 29 01:31:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:31:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:46.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:46 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:31:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:31:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:31:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:47.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:31:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:31:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:31:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:48.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:31:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:49.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:31:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:50.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:51 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:31:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:51.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:31:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:31:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:52.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:53.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:31:54
Nov 29 01:31:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:31:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:31:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'images', '.mgr', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'volumes']
Nov 29 01:31:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:31:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:31:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:31:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:31:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:31:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:31:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:31:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:31:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:31:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:54.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:31:55 np0005539508 ceph-mgr[74948]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1221624088
Nov 29 01:31:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:55.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:31:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:56.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:31:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:57.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:31:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:31:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:58.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:31:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:31:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:31:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:59.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:32:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:00.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:32:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:01.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:32:02 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:32:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:02.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:03.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:04.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:32:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:05.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:32:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:32:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:06.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:32:06 np0005539508 podman[156602]: 2025-11-29 06:32:06.767637959 +0000 UTC m=+25.200862531 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 01:32:06 np0005539508 podman[156913]: 2025-11-29 06:32:06.886737429 +0000 UTC m=+0.023188293 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 01:32:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:32:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:07.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:08 np0005539508 podman[156913]: 2025-11-29 06:32:08.00396598 +0000 UTC m=+1.140416794 container create 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 01:32:08 np0005539508 python3[156548]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 01:32:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:08.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:08 np0005539508 python3.9[157106]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:32:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:09.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:09 np0005539508 python3.9[157261]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:32:10 np0005539508 python3.9[157337]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:32:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:10.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:10 np0005539508 python3.9[157488]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397930.286957-1303-58847719296659/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:32:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:11.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:11 np0005539508 python3.9[157565]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 01:32:11 np0005539508 systemd[1]: Reloading.
Nov 29 01:32:11 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:32:11 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:32:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:32:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:32:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:12.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:32:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:32:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:32:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:32:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:32:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:32:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:32:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:32:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:32:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:32:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:32:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:32:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:32:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:32:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:32:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:32:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:32:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:32:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:32:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:32:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:32:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:32:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:32:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:32:13 np0005539508 python3.9[157677]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:32:13 np0005539508 systemd[1]: Reloading.
Nov 29 01:32:13 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:32:13 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:32:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:13.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:13 np0005539508 systemd[1]: Starting ovn_metadata_agent container...
Nov 29 01:32:14 np0005539508 podman[157716]: 2025-11-29 06:32:14.040254521 +0000 UTC m=+0.331037141 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 01:32:14 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:32:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da201fd80aede9d0b94bcf8a7b6f117abc11be9268ffa9452262c34d0c0a2f68/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 29 01:32:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da201fd80aede9d0b94bcf8a7b6f117abc11be9268ffa9452262c34d0c0a2f68/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 01:32:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:14 np0005539508 systemd[1]: Started /usr/bin/podman healthcheck run 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000.
Nov 29 01:32:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:32:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:14.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:32:14 np0005539508 podman[157720]: 2025-11-29 06:32:14.971443261 +0000 UTC m=+1.243345352 container init 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 01:32:14 np0005539508 ovn_metadata_agent[157760]: + sudo -E kolla_set_configs
Nov 29 01:32:15 np0005539508 podman[157720]: 2025-11-29 06:32:15.022735845 +0000 UTC m=+1.294637886 container start 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: INFO:__main__:Validating config file
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: INFO:__main__:Copying service configuration files
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: INFO:__main__:Writing out command to execute
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: ++ cat /run_command
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: + CMD=neutron-ovn-metadata-agent
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: + ARGS=
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: + sudo kolla_copy_cacerts
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: + [[ ! -n '' ]]
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: + . kolla_extend_start
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: Running command: 'neutron-ovn-metadata-agent'
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: + umask 0022
Nov 29 01:32:15 np0005539508 ovn_metadata_agent[157760]: + exec neutron-ovn-metadata-agent
Nov 29 01:32:15 np0005539508 edpm-start-podman-container[157720]: ovn_metadata_agent
Nov 29 01:32:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:15.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:15 np0005539508 edpm-start-podman-container[157719]: Creating additional drop-in dependency for "ovn_metadata_agent" (81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000)
Nov 29 01:32:15 np0005539508 systemd[1]: Reloading.
Nov 29 01:32:15 np0005539508 podman[157769]: 2025-11-29 06:32:15.546079674 +0000 UTC m=+0.505772399 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 01:32:15 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:32:15 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:32:15 np0005539508 systemd[1]: Started ovn_metadata_agent container.
Nov 29 01:32:16 np0005539508 systemd[1]: session-48.scope: Deactivated successfully.
Nov 29 01:32:16 np0005539508 systemd[1]: session-48.scope: Consumed 58.755s CPU time.
Nov 29 01:32:16 np0005539508 systemd-logind[797]: Session 48 logged out. Waiting for processes to exit.
Nov 29 01:32:16 np0005539508 systemd-logind[797]: Removed session 48.
Nov 29 01:32:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:16.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.161 157767 INFO neutron.common.config [-] Logging enabled!#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.162 157767 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.162 157767 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.162 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.163 157767 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.163 157767 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.163 157767 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.163 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.163 157767 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.163 157767 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.163 157767 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.164 157767 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.164 157767 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.164 157767 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.164 157767 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.164 157767 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.164 157767 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.164 157767 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.164 157767 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.165 157767 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.165 157767 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.165 157767 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.165 157767 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.165 157767 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.165 157767 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.165 157767 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.165 157767 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.165 157767 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.165 157767 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.166 157767 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.166 157767 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.166 157767 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.166 157767 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.166 157767 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.166 157767 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.166 157767 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.166 157767 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.166 157767 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.167 157767 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.167 157767 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.167 157767 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.167 157767 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.167 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.167 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.167 157767 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.167 157767 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.167 157767 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.168 157767 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.168 157767 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.168 157767 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.168 157767 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.168 157767 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.168 157767 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.168 157767 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.168 157767 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.168 157767 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.168 157767 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.169 157767 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.169 157767 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.169 157767 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.169 157767 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.169 157767 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.169 157767 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.169 157767 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.169 157767 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.170 157767 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.170 157767 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.170 157767 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.170 157767 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.170 157767 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.170 157767 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.170 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.170 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.171 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.171 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.171 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.171 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.171 157767 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.171 157767 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.171 157767 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.172 157767 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.172 157767 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.172 157767 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.172 157767 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.172 157767 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.172 157767 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.172 157767 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.172 157767 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.172 157767 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.173 157767 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.173 157767 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.173 157767 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.173 157767 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.173 157767 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.173 157767 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.173 157767 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.173 157767 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.173 157767 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.173 157767 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.174 157767 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.174 157767 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.174 157767 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.174 157767 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.174 157767 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.174 157767 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.174 157767 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.175 157767 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.175 157767 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.175 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.175 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.175 157767 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.175 157767 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.175 157767 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.175 157767 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.175 157767 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.176 157767 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.176 157767 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.176 157767 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.176 157767 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.176 157767 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.176 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.176 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.177 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.177 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.177 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.177 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.177 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.177 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.177 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.177 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.178 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.178 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.178 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.178 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.178 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.178 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.178 157767 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.178 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.179 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.179 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.179 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.179 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.179 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.179 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.179 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.179 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.180 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.180 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.180 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.180 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.180 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.180 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.181 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.181 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.181 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.181 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.181 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.182 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.182 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.182 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.182 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.182 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.182 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.182 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.183 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.183 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.183 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.183 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.183 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.183 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.183 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.183 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.183 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.184 157767 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.184 157767 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.184 157767 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.184 157767 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.184 157767 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.184 157767 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.184 157767 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.184 157767 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.185 157767 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.185 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.185 157767 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.185 157767 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.185 157767 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.185 157767 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.185 157767 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.185 157767 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.186 157767 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.186 157767 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.186 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.186 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.186 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.186 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.186 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.186 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.187 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.187 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.187 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.187 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.187 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.187 157767 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.187 157767 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.187 157767 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.188 157767 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.188 157767 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.188 157767 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.188 157767 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.188 157767 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.188 157767 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.188 157767 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.188 157767 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.188 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.188 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.189 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.189 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.189 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.189 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.189 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.189 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.189 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.189 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.189 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.190 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.190 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.190 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.190 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.190 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.190 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.190 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.190 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.190 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.191 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.191 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.191 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.191 157767 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.191 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.191 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.191 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.191 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.191 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.192 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.192 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.192 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.192 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.192 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.192 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.192 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.193 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.193 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.193 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.193 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.193 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.193 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.193 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.194 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.194 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.194 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.194 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.194 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.194 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.194 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.194 157767 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.195 157767 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.195 157767 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.195 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.195 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.195 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.195 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.195 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.195 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.196 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.196 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.196 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.196 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.196 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.196 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.196 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.196 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.196 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.197 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.197 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.197 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.197 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.197 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.197 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.197 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.197 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.198 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.198 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.198 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.198 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.198 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.198 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.198 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.198 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.199 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.199 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.199 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.199 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.199 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.199 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.199 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.208 157767 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.209 157767 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.209 157767 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.209 157767 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.209 157767 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.224 157767 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 93db784b-4e42-404a-b548-49ad165fd917 (UUID: 93db784b-4e42-404a-b548-49ad165fd917) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.246 157767 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.247 157767 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.247 157767 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.247 157767 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.250 157767 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.256 157767 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.263 157767 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '93db784b-4e42-404a-b548-49ad165fd917'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fe9f772b8b0>], external_ids={}, name=93db784b-4e42-404a-b548-49ad165fd917, nb_cfg_timestamp=1764397848072, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.264 157767 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fe9f7719f70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.264 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.264 157767 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.265 157767 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.265 157767 INFO oslo_service.service [-] Starting 1 workers#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.269 157767 DEBUG oslo_service.service [-] Started child 157875 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.273 157767 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpzxveuc71/privsep.sock']#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.273 157875 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-954079'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.315 157875 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.316 157875 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.316 157875 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.321 157875 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.331 157875 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 29 01:32:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:32:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.341 157875 INFO eventlet.wsgi.server [-] (157875) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Nov 29 01:32:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:17.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:17 np0005539508 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 29 01:32:18 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:18.003 157767 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 01:32:18 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:18.004 157767 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpzxveuc71/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 29 01:32:18 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.844 157880 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 01:32:18 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.851 157880 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 01:32:18 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.853 157880 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Nov 29 01:32:18 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.854 157880 INFO oslo.privsep.daemon [-] privsep daemon running as pid 157880#033[00m
Nov 29 01:32:18 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:18.007 157880 DEBUG oslo.privsep.daemon [-] privsep: reply[2da9e522-2821-48ce-a624-c8ed0481daac]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 01:32:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:18 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:18.530 157880 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:32:18 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:18.530 157880 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:32:18 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:18.531 157880 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:32:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:18.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.112 157880 DEBUG oslo.privsep.daemon [-] privsep: reply[c9fe6125-13d3-4b57-b1c7-701ce4d0cd7a]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.114 157767 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=93db784b-4e42-404a-b548-49ad165fd917, column=external_ids, values=({'neutron:ovn-metadata-id': '8bce076b-c275-5b6a-8cac-f4510edf00a8'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.127 157767 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=93db784b-4e42-404a-b548-49ad165fd917, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.133 157767 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.133 157767 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.134 157767 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.134 157767 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.134 157767 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.134 157767 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.134 157767 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.135 157767 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.135 157767 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.135 157767 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.135 157767 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.135 157767 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.136 157767 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.136 157767 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.136 157767 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.136 157767 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.136 157767 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.137 157767 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.137 157767 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.137 157767 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.137 157767 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.137 157767 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.137 157767 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.138 157767 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.138 157767 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.138 157767 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.138 157767 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.138 157767 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.139 157767 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.139 157767 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.139 157767 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.139 157767 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.139 157767 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.139 157767 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.139 157767 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.140 157767 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.140 157767 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.140 157767 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.140 157767 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.140 157767 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.141 157767 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.141 157767 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.141 157767 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.141 157767 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.142 157767 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.142 157767 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.142 157767 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.142 157767 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.142 157767 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.142 157767 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.143 157767 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.143 157767 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.143 157767 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.143 157767 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.143 157767 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.144 157767 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.144 157767 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.144 157767 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.144 157767 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.144 157767 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.145 157767 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.145 157767 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.145 157767 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.145 157767 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.146 157767 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.146 157767 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.146 157767 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.146 157767 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.146 157767 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.147 157767 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.147 157767 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.147 157767 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.147 157767 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.147 157767 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.148 157767 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.148 157767 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.148 157767 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.148 157767 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.149 157767 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.149 157767 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.149 157767 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.149 157767 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.149 157767 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.150 157767 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.150 157767 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.150 157767 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.150 157767 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.150 157767 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.151 157767 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.151 157767 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.151 157767 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.151 157767 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.151 157767 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.151 157767 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.152 157767 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.152 157767 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.152 157767 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.152 157767 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.152 157767 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.153 157767 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.153 157767 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.153 157767 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.153 157767 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.153 157767 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.154 157767 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.154 157767 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.154 157767 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.154 157767 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.154 157767 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.155 157767 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.155 157767 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.155 157767 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.155 157767 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.156 157767 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.156 157767 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.156 157767 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.156 157767 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.156 157767 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.157 157767 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.157 157767 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.157 157767 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.157 157767 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.157 157767 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.158 157767 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.158 157767 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.158 157767 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.158 157767 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.159 157767 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.159 157767 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.159 157767 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.159 157767 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.159 157767 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.160 157767 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.160 157767 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.160 157767 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.160 157767 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.161 157767 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.161 157767 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.161 157767 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.161 157767 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.161 157767 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.162 157767 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.162 157767 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.162 157767 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.162 157767 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.162 157767 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.162 157767 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.163 157767 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.163 157767 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.163 157767 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.163 157767 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.163 157767 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.164 157767 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.164 157767 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.164 157767 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.164 157767 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.164 157767 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.165 157767 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.165 157767 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.165 157767 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.165 157767 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.165 157767 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.165 157767 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.166 157767 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.166 157767 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.166 157767 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.166 157767 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.166 157767 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.167 157767 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.167 157767 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.167 157767 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.167 157767 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.167 157767 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.168 157767 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.168 157767 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.168 157767 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.168 157767 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.168 157767 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.169 157767 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.169 157767 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.169 157767 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.169 157767 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.169 157767 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.170 157767 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.170 157767 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.170 157767 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.171 157767 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.171 157767 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.171 157767 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.171 157767 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.171 157767 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.172 157767 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.172 157767 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.172 157767 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.172 157767 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.172 157767 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.173 157767 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.173 157767 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.173 157767 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.173 157767 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.173 157767 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.173 157767 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.174 157767 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.174 157767 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.174 157767 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.174 157767 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.174 157767 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.175 157767 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.175 157767 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.175 157767 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.175 157767 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.175 157767 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.176 157767 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.176 157767 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.176 157767 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.176 157767 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.176 157767 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.177 157767 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.177 157767 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.177 157767 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.177 157767 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.177 157767 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.178 157767 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.178 157767 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.178 157767 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.178 157767 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.178 157767 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.178 157767 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.179 157767 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.179 157767 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.179 157767 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.179 157767 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.179 157767 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.180 157767 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.180 157767 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.180 157767 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.180 157767 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.180 157767 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.181 157767 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.181 157767 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.181 157767 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.181 157767 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.181 157767 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.182 157767 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.182 157767 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.182 157767 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.182 157767 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.183 157767 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.183 157767 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.183 157767 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.183 157767 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.183 157767 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.183 157767 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.184 157767 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.184 157767 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.184 157767 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.184 157767 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.184 157767 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.185 157767 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.185 157767 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.185 157767 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.185 157767 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.185 157767 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.186 157767 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.186 157767 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.186 157767 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.186 157767 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.186 157767 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.187 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.187 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.187 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.187 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.187 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.188 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.188 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.188 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.188 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.188 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.189 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.189 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.189 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.189 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.189 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.190 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.190 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.190 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.190 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.190 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.191 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.191 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.191 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.191 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.191 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.192 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.192 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.192 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.192 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.192 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.193 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.193 157767 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.193 157767 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.193 157767 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.193 157767 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:32:19 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.194 157767 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 29 01:32:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:19.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:20.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:21.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:22 np0005539508 auditd[707]: Audit daemon rotating log files
Nov 29 01:32:22 np0005539508 systemd-logind[797]: New session 49 of user zuul.
Nov 29 01:32:22 np0005539508 systemd[1]: Started Session 49 of User zuul.
Nov 29 01:32:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:32:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:22.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:23 np0005539508 python3.9[158043]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:32:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:23.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:32:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:32:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:32:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:32:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:32:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:32:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:24.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:32:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:25.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:32:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:26 np0005539508 python3.9[158199]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:32:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:26.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:27.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:32:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:28.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:32:28 np0005539508 ceph-mds[94810]: mds.beacon.cephfs.compute-0.jzycnf missed beacon ack from the monitors
Nov 29 01:32:29 np0005539508 python3.9[158416]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 01:32:29 np0005539508 systemd[1]: Reloading.
Nov 29 01:32:29 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:32:29 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:32:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:29.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:32:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:32:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:32:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:32:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:32:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:32:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:32:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:32:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:32:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:32:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:30.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:30 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:32:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:32:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:31.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:32:32 np0005539508 python3.9[158611]: ansible-ansible.builtin.service_facts Invoked
Nov 29 01:32:32 np0005539508 network[158629]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 01:32:32 np0005539508 network[158630]: 'network-scripts' will be removed from distribution in near future.
Nov 29 01:32:32 np0005539508 network[158631]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 01:32:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:32.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:32 np0005539508 ceph-mds[94810]: mds.beacon.cephfs.compute-0.jzycnf missed beacon ack from the monitors
Nov 29 01:32:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:33.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:34.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:35 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 01:32:35 np0005539508 ceph-mon[74654]: paxos.0).electionLogic(23) init, last seen epoch 23, mid-election, bumping
Nov 29 01:32:35 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 01:32:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:35.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:35 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 01:32:35 np0005539508 ceph-mon[74654]: paxos.0).electionLogic(27) init, last seen epoch 27, mid-election, bumping
Nov 29 01:32:35 np0005539508 ceph-mon[74654]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 01:32:36 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 01:32:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:36.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:36 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 01:32:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 01:32:36 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active} 2 up:standby
Nov 29 01:32:36 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 29 01:32:36 np0005539508 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.vxabpq(active, since 15m), standbys: compute-2.ngsyhe, compute-1.gaxpay
Nov 29 01:32:36 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 01:32:37 np0005539508 python3.9[158895]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:32:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:37.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:38 np0005539508 python3.9[159049]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:32:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:38 np0005539508 ceph-mon[74654]: mon.compute-1 calling monitor election
Nov 29 01:32:38 np0005539508 ceph-mon[74654]: mon.compute-2 calling monitor election
Nov 29 01:32:38 np0005539508 ceph-mon[74654]: mon.compute-2 is new leader, mons compute-2,compute-1 in quorum (ranks 1,2)
Nov 29 01:32:38 np0005539508 ceph-mon[74654]: mon.compute-0 calling monitor election
Nov 29 01:32:38 np0005539508 ceph-mon[74654]: overall HEALTH_OK
Nov 29 01:32:38 np0005539508 ceph-mon[74654]: mon.compute-2 calling monitor election
Nov 29 01:32:38 np0005539508 ceph-mon[74654]: mon.compute-1 calling monitor election
Nov 29 01:32:38 np0005539508 ceph-mon[74654]: mon.compute-0 calling monitor election
Nov 29 01:32:38 np0005539508 ceph-mon[74654]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 01:32:38 np0005539508 ceph-mon[74654]: overall HEALTH_OK
Nov 29 01:32:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:38.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:39 np0005539508 python3.9[159202]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:32:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:39.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:40 np0005539508 python3.9[159356]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:32:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:32:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:40.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:32:41 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:32:41 np0005539508 python3.9[159511]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:32:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:41.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:41 np0005539508 python3.9[159665]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:32:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:42.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:43 np0005539508 python3.9[159818]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:32:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:43.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:44 np0005539508 podman[159944]: 2025-11-29 06:32:44.591056378 +0000 UTC m=+0.162005068 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 01:32:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:44.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:45 np0005539508 python3.9[159988]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:32:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:32:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:45.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:32:45 np0005539508 podman[160150]: 2025-11-29 06:32:45.991673321 +0000 UTC m=+0.065414769 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 01:32:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:32:46 np0005539508 python3.9[160151]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:32:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:46.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:46 np0005539508 python3.9[160486]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:32:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:32:46 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:32:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:32:46 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:32:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:32:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:47.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:48 np0005539508 python3.9[160653]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:32:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:48.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:49 np0005539508 python3.9[160807]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:32:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:32:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:49.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:32:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:50.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:50 np0005539508 python3.9[160959]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:32:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:51.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:51 np0005539508 python3.9[161114]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:32:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:32:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:52 np0005539508 python3.9[161266]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:32:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:32:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:52.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:32:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:53.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:53 np0005539508 python3.9[161421]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:32:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:32:54
Nov 29 01:32:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:32:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:32:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'images', 'default.rgw.log', 'vms', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'default.rgw.control', '.rgw.root']
Nov 29 01:32:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:32:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:32:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:32:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:32:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:32:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:32:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:32:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:32:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:54.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:32:54 np0005539508 python3.9[161574]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:32:55 np0005539508 python3.9[161727]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:32:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:32:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:55.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:32:55 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:32:56 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev eaca5981-b568-4779-8f07-fa20e06487ca does not exist
Nov 29 01:32:56 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 7e53b879-a216-47f1-a5eb-1730266c0125 does not exist
Nov 29 01:32:56 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 7263215f-e632-468a-a8bd-8f23d353ca3b does not exist
Nov 29 01:32:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:32:56 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:32:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:32:56 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:32:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:32:56 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:32:56 np0005539508 python3.9[161882]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:32:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:32:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:56.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:32:56 np0005539508 podman[162121]: 2025-11-29 06:32:56.603847019 +0000 UTC m=+0.029645978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:32:57 np0005539508 python3.9[162187]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:32:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:32:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:57.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:32:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:32:57 np0005539508 podman[162121]: 2025-11-29 06:32:57.682663769 +0000 UTC m=+1.108462718 container create 26acf1bfa2f77e03b70753ce29d97d89003b8f5d60b94cac0bda1291f4d13c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 01:32:57 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:32:57 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:32:57 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:32:57 np0005539508 systemd[1]: Started libpod-conmon-26acf1bfa2f77e03b70753ce29d97d89003b8f5d60b94cac0bda1291f4d13c19.scope.
Nov 29 01:32:57 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:32:57 np0005539508 python3.9[162340]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:32:58 np0005539508 podman[162121]: 2025-11-29 06:32:58.214556504 +0000 UTC m=+1.640355413 container init 26acf1bfa2f77e03b70753ce29d97d89003b8f5d60b94cac0bda1291f4d13c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 01:32:58 np0005539508 podman[162121]: 2025-11-29 06:32:58.227265638 +0000 UTC m=+1.653064547 container start 26acf1bfa2f77e03b70753ce29d97d89003b8f5d60b94cac0bda1291f4d13c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:32:58 np0005539508 systemd[1]: libpod-26acf1bfa2f77e03b70753ce29d97d89003b8f5d60b94cac0bda1291f4d13c19.scope: Deactivated successfully.
Nov 29 01:32:58 np0005539508 trusting_satoshi[162343]: 167 167
Nov 29 01:32:58 np0005539508 conmon[162343]: conmon 26acf1bfa2f77e03b707 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-26acf1bfa2f77e03b70753ce29d97d89003b8f5d60b94cac0bda1291f4d13c19.scope/container/memory.events
Nov 29 01:32:58 np0005539508 podman[162121]: 2025-11-29 06:32:58.287414936 +0000 UTC m=+1.713213895 container attach 26acf1bfa2f77e03b70753ce29d97d89003b8f5d60b94cac0bda1291f4d13c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 01:32:58 np0005539508 podman[162121]: 2025-11-29 06:32:58.288567519 +0000 UTC m=+1.714366428 container died 26acf1bfa2f77e03b70753ce29d97d89003b8f5d60b94cac0bda1291f4d13c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:32:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:32:58 np0005539508 systemd[1]: var-lib-containers-storage-overlay-1bcc9a1ad1f4b7f1d84ed977f41bc21d4ce75a967421c43bcb35efd53329093f-merged.mount: Deactivated successfully.
Nov 29 01:32:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:32:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:58.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:32:58 np0005539508 python3.9[162512]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:32:58 np0005539508 podman[162121]: 2025-11-29 06:32:58.73387009 +0000 UTC m=+2.159668989 container remove 26acf1bfa2f77e03b70753ce29d97d89003b8f5d60b94cac0bda1291f4d13c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 01:32:58 np0005539508 systemd[1]: libpod-conmon-26acf1bfa2f77e03b70753ce29d97d89003b8f5d60b94cac0bda1291f4d13c19.scope: Deactivated successfully.
Nov 29 01:32:58 np0005539508 podman[162546]: 2025-11-29 06:32:58.875473755 +0000 UTC m=+0.024900223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:32:59 np0005539508 podman[162546]: 2025-11-29 06:32:59.284311555 +0000 UTC m=+0.433737973 container create 310023237c9171541669dfd093d4e036bd3bba2126ffdb05bae28861a08693e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bose, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 01:32:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:32:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:32:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:59.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:32:59 np0005539508 systemd[1]: Started libpod-conmon-310023237c9171541669dfd093d4e036bd3bba2126ffdb05bae28861a08693e1.scope.
Nov 29 01:32:59 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:32:59 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a4d363fd95cb12eb16a14f4c2d5015258ac8aba7f902dcbb8c5e23a81d31790/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:32:59 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a4d363fd95cb12eb16a14f4c2d5015258ac8aba7f902dcbb8c5e23a81d31790/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:32:59 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a4d363fd95cb12eb16a14f4c2d5015258ac8aba7f902dcbb8c5e23a81d31790/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:32:59 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a4d363fd95cb12eb16a14f4c2d5015258ac8aba7f902dcbb8c5e23a81d31790/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:32:59 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a4d363fd95cb12eb16a14f4c2d5015258ac8aba7f902dcbb8c5e23a81d31790/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:33:00 np0005539508 python3.9[162691]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 01:33:00 np0005539508 podman[162546]: 2025-11-29 06:33:00.344304857 +0000 UTC m=+1.493731265 container init 310023237c9171541669dfd093d4e036bd3bba2126ffdb05bae28861a08693e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bose, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 01:33:00 np0005539508 podman[162546]: 2025-11-29 06:33:00.360567171 +0000 UTC m=+1.509993589 container start 310023237c9171541669dfd093d4e036bd3bba2126ffdb05bae28861a08693e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bose, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:33:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:00.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:00 np0005539508 podman[162546]: 2025-11-29 06:33:00.941022584 +0000 UTC m=+2.090448992 container attach 310023237c9171541669dfd093d4e036bd3bba2126ffdb05bae28861a08693e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bose, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:33:01 np0005539508 xenodochial_bose[162662]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:33:01 np0005539508 xenodochial_bose[162662]: --> relative data size: 1.0
Nov 29 01:33:01 np0005539508 xenodochial_bose[162662]: --> All data devices are unavailable
Nov 29 01:33:01 np0005539508 systemd[1]: libpod-310023237c9171541669dfd093d4e036bd3bba2126ffdb05bae28861a08693e1.scope: Deactivated successfully.
Nov 29 01:33:01 np0005539508 podman[162546]: 2025-11-29 06:33:01.192409316 +0000 UTC m=+2.341835704 container died 310023237c9171541669dfd093d4e036bd3bba2126ffdb05bae28861a08693e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bose, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 01:33:01 np0005539508 python3.9[162852]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 01:33:01 np0005539508 systemd[1]: Reloading.
Nov 29 01:33:01 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:33:01 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:33:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:33:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:01.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:33:01 np0005539508 systemd[1]: var-lib-containers-storage-overlay-4a4d363fd95cb12eb16a14f4c2d5015258ac8aba7f902dcbb8c5e23a81d31790-merged.mount: Deactivated successfully.
Nov 29 01:33:01 np0005539508 podman[162546]: 2025-11-29 06:33:01.978468232 +0000 UTC m=+3.127894650 container remove 310023237c9171541669dfd093d4e036bd3bba2126ffdb05bae28861a08693e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 01:33:02 np0005539508 systemd[1]: libpod-conmon-310023237c9171541669dfd093d4e036bd3bba2126ffdb05bae28861a08693e1.scope: Deactivated successfully.
Nov 29 01:33:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:02 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:33:02 np0005539508 podman[163166]: 2025-11-29 06:33:02.578123732 +0000 UTC m=+0.020323922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:33:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:02.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:02 np0005539508 python3.9[163208]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:33:03 np0005539508 podman[163166]: 2025-11-29 06:33:03.243252724 +0000 UTC m=+0.685452894 container create 25384d1dc3bdb5fd583bfb1ff34a8ffa852385842a09a0271556f020349655aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 01:33:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:03.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:03 np0005539508 systemd[1]: Started libpod-conmon-25384d1dc3bdb5fd583bfb1ff34a8ffa852385842a09a0271556f020349655aa.scope.
Nov 29 01:33:03 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:33:03 np0005539508 python3.9[163362]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:33:03 np0005539508 podman[163166]: 2025-11-29 06:33:03.57530258 +0000 UTC m=+1.017502810 container init 25384d1dc3bdb5fd583bfb1ff34a8ffa852385842a09a0271556f020349655aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_curran, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:33:03 np0005539508 podman[163166]: 2025-11-29 06:33:03.58126712 +0000 UTC m=+1.023467290 container start 25384d1dc3bdb5fd583bfb1ff34a8ffa852385842a09a0271556f020349655aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_curran, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:33:03 np0005539508 awesome_curran[163365]: 167 167
Nov 29 01:33:03 np0005539508 systemd[1]: libpod-25384d1dc3bdb5fd583bfb1ff34a8ffa852385842a09a0271556f020349655aa.scope: Deactivated successfully.
Nov 29 01:33:03 np0005539508 podman[163166]: 2025-11-29 06:33:03.588331652 +0000 UTC m=+1.030531822 container attach 25384d1dc3bdb5fd583bfb1ff34a8ffa852385842a09a0271556f020349655aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_curran, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:33:03 np0005539508 podman[163166]: 2025-11-29 06:33:03.588905578 +0000 UTC m=+1.031105768 container died 25384d1dc3bdb5fd583bfb1ff34a8ffa852385842a09a0271556f020349655aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 01:33:03 np0005539508 systemd[1]: var-lib-containers-storage-overlay-06ed1fa480b9bc47727b6af792927a984659e8f346b2402a672dbc6b5b53e6d9-merged.mount: Deactivated successfully.
Nov 29 01:33:03 np0005539508 podman[163166]: 2025-11-29 06:33:03.629476257 +0000 UTC m=+1.071676427 container remove 25384d1dc3bdb5fd583bfb1ff34a8ffa852385842a09a0271556f020349655aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_curran, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 01:33:03 np0005539508 systemd[1]: libpod-conmon-25384d1dc3bdb5fd583bfb1ff34a8ffa852385842a09a0271556f020349655aa.scope: Deactivated successfully.
Nov 29 01:33:03 np0005539508 podman[163390]: 2025-11-29 06:33:03.79200089 +0000 UTC m=+0.038932293 container create 7634b6788e77965e38d94054b0df4815d91a6887c7abfa18dab20fe45fa238a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_driscoll, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:33:03 np0005539508 systemd[1]: Started libpod-conmon-7634b6788e77965e38d94054b0df4815d91a6887c7abfa18dab20fe45fa238a3.scope.
Nov 29 01:33:03 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:33:03 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6721d42e9225f604617b00f1b475769387969ec65ec577479e491bdf3d705b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:33:03 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6721d42e9225f604617b00f1b475769387969ec65ec577479e491bdf3d705b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:33:03 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6721d42e9225f604617b00f1b475769387969ec65ec577479e491bdf3d705b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:33:03 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6721d42e9225f604617b00f1b475769387969ec65ec577479e491bdf3d705b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:33:03 np0005539508 podman[163390]: 2025-11-29 06:33:03.854245439 +0000 UTC m=+0.101176832 container init 7634b6788e77965e38d94054b0df4815d91a6887c7abfa18dab20fe45fa238a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_driscoll, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:33:03 np0005539508 podman[163390]: 2025-11-29 06:33:03.86057718 +0000 UTC m=+0.107508573 container start 7634b6788e77965e38d94054b0df4815d91a6887c7abfa18dab20fe45fa238a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:33:03 np0005539508 podman[163390]: 2025-11-29 06:33:03.863634017 +0000 UTC m=+0.110565410 container attach 7634b6788e77965e38d94054b0df4815d91a6887c7abfa18dab20fe45fa238a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_driscoll, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:33:03 np0005539508 podman[163390]: 2025-11-29 06:33:03.775157409 +0000 UTC m=+0.022088822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:33:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]: {
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:    "1": [
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:        {
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:            "devices": [
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:                "/dev/loop3"
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:            ],
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:            "lv_name": "ceph_lv0",
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:            "lv_size": "7511998464",
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:            "name": "ceph_lv0",
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:            "tags": {
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:                "ceph.cluster_name": "ceph",
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:                "ceph.crush_device_class": "",
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:                "ceph.encrypted": "0",
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:                "ceph.osd_id": "1",
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:                "ceph.type": "block",
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:                "ceph.vdo": "0"
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:            },
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:            "type": "block",
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:            "vg_name": "ceph_vg0"
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:        }
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]:    ]
Nov 29 01:33:04 np0005539508 infallible_driscoll[163406]: }
Nov 29 01:33:04 np0005539508 systemd[1]: libpod-7634b6788e77965e38d94054b0df4815d91a6887c7abfa18dab20fe45fa238a3.scope: Deactivated successfully.
Nov 29 01:33:04 np0005539508 podman[163390]: 2025-11-29 06:33:04.668616574 +0000 UTC m=+0.915547987 container died 7634b6788e77965e38d94054b0df4815d91a6887c7abfa18dab20fe45fa238a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 01:33:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:33:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:04.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:33:04 np0005539508 systemd[1]: var-lib-containers-storage-overlay-b6721d42e9225f604617b00f1b475769387969ec65ec577479e491bdf3d705b2-merged.mount: Deactivated successfully.
Nov 29 01:33:04 np0005539508 podman[163390]: 2025-11-29 06:33:04.729869654 +0000 UTC m=+0.976801047 container remove 7634b6788e77965e38d94054b0df4815d91a6887c7abfa18dab20fe45fa238a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_driscoll, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 01:33:04 np0005539508 systemd[1]: libpod-conmon-7634b6788e77965e38d94054b0df4815d91a6887c7abfa18dab20fe45fa238a3.scope: Deactivated successfully.
Nov 29 01:33:05 np0005539508 python3.9[163677]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:33:05 np0005539508 podman[163744]: 2025-11-29 06:33:05.338616725 +0000 UTC m=+0.033754636 container create 601cd3aa451a375677818abc071a5ac9c2c6930cd03f1bc8ee10cb9ab7d25998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shaw, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:33:05 np0005539508 systemd[1]: Started libpod-conmon-601cd3aa451a375677818abc071a5ac9c2c6930cd03f1bc8ee10cb9ab7d25998.scope.
Nov 29 01:33:05 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:33:05 np0005539508 podman[163744]: 2025-11-29 06:33:05.323948626 +0000 UTC m=+0.019086567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:33:05 np0005539508 podman[163744]: 2025-11-29 06:33:05.424847848 +0000 UTC m=+0.119985799 container init 601cd3aa451a375677818abc071a5ac9c2c6930cd03f1bc8ee10cb9ab7d25998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 01:33:05 np0005539508 podman[163744]: 2025-11-29 06:33:05.433933368 +0000 UTC m=+0.129071289 container start 601cd3aa451a375677818abc071a5ac9c2c6930cd03f1bc8ee10cb9ab7d25998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shaw, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 01:33:05 np0005539508 podman[163744]: 2025-11-29 06:33:05.437538271 +0000 UTC m=+0.132676192 container attach 601cd3aa451a375677818abc071a5ac9c2c6930cd03f1bc8ee10cb9ab7d25998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shaw, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 01:33:05 np0005539508 hardcore_shaw[163782]: 167 167
Nov 29 01:33:05 np0005539508 systemd[1]: libpod-601cd3aa451a375677818abc071a5ac9c2c6930cd03f1bc8ee10cb9ab7d25998.scope: Deactivated successfully.
Nov 29 01:33:05 np0005539508 podman[163744]: 2025-11-29 06:33:05.442732379 +0000 UTC m=+0.137870300 container died 601cd3aa451a375677818abc071a5ac9c2c6930cd03f1bc8ee10cb9ab7d25998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shaw, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 01:33:05 np0005539508 systemd[1]: var-lib-containers-storage-overlay-df6f04d5e51d675a7ead722a661490a642a8ddc7f6e8764a6d70b5867361c0d2-merged.mount: Deactivated successfully.
Nov 29 01:33:05 np0005539508 podman[163744]: 2025-11-29 06:33:05.476689939 +0000 UTC m=+0.171827860 container remove 601cd3aa451a375677818abc071a5ac9c2c6930cd03f1bc8ee10cb9ab7d25998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shaw, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 01:33:05 np0005539508 systemd[1]: libpod-conmon-601cd3aa451a375677818abc071a5ac9c2c6930cd03f1bc8ee10cb9ab7d25998.scope: Deactivated successfully.
Nov 29 01:33:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:05.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:05 np0005539508 podman[163881]: 2025-11-29 06:33:05.635765824 +0000 UTC m=+0.026521029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:33:05 np0005539508 python3.9[163923]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:33:06 np0005539508 podman[163881]: 2025-11-29 06:33:06.162731347 +0000 UTC m=+0.553486522 container create ec5ef4187d31c60a36ab011e815bb8fe093027af562543e90170a7801fd8250f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_albattani, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 01:33:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:06 np0005539508 systemd[1]: Started libpod-conmon-ec5ef4187d31c60a36ab011e815bb8fe093027af562543e90170a7801fd8250f.scope.
Nov 29 01:33:06 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:33:06 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b45609cfea26294b4b655fbf5ccb70b9fb5f2aaade1a546216aea03f44ea54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:33:06 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b45609cfea26294b4b655fbf5ccb70b9fb5f2aaade1a546216aea03f44ea54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:33:06 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b45609cfea26294b4b655fbf5ccb70b9fb5f2aaade1a546216aea03f44ea54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:33:06 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b45609cfea26294b4b655fbf5ccb70b9fb5f2aaade1a546216aea03f44ea54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:33:06 np0005539508 podman[163881]: 2025-11-29 06:33:06.600263057 +0000 UTC m=+0.991018252 container init ec5ef4187d31c60a36ab011e815bb8fe093027af562543e90170a7801fd8250f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_albattani, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:33:06 np0005539508 podman[163881]: 2025-11-29 06:33:06.614287707 +0000 UTC m=+1.005042922 container start ec5ef4187d31c60a36ab011e815bb8fe093027af562543e90170a7801fd8250f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_albattani, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 01:33:06 np0005539508 podman[163881]: 2025-11-29 06:33:06.618363964 +0000 UTC m=+1.009119229 container attach ec5ef4187d31c60a36ab011e815bb8fe093027af562543e90170a7801fd8250f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_albattani, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 01:33:06 np0005539508 python3.9[164126]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:33:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:33:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:06.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:33:07 np0005539508 python3.9[164287]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:33:07 np0005539508 keen_albattani[164129]: {
Nov 29 01:33:07 np0005539508 keen_albattani[164129]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:33:07 np0005539508 keen_albattani[164129]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:33:07 np0005539508 keen_albattani[164129]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:33:07 np0005539508 keen_albattani[164129]:        "osd_id": 1,
Nov 29 01:33:07 np0005539508 keen_albattani[164129]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:33:07 np0005539508 keen_albattani[164129]:        "type": "bluestore"
Nov 29 01:33:07 np0005539508 keen_albattani[164129]:    }
Nov 29 01:33:07 np0005539508 keen_albattani[164129]: }
Nov 29 01:33:07 np0005539508 systemd[1]: libpod-ec5ef4187d31c60a36ab011e815bb8fe093027af562543e90170a7801fd8250f.scope: Deactivated successfully.
Nov 29 01:33:07 np0005539508 podman[163881]: 2025-11-29 06:33:07.458701701 +0000 UTC m=+1.849456876 container died ec5ef4187d31c60a36ab011e815bb8fe093027af562543e90170a7801fd8250f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:33:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:07.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:33:08 np0005539508 python3.9[164466]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:33:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:08 np0005539508 systemd[1]: var-lib-containers-storage-overlay-c1b45609cfea26294b4b655fbf5ccb70b9fb5f2aaade1a546216aea03f44ea54-merged.mount: Deactivated successfully.
Nov 29 01:33:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:08.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:09 np0005539508 python3.9[164622]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 29 01:33:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:09.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:09 np0005539508 podman[163881]: 2025-11-29 06:33:09.54284411 +0000 UTC m=+3.933599285 container remove ec5ef4187d31c60a36ab011e815bb8fe093027af562543e90170a7801fd8250f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_albattani, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Nov 29 01:33:09 np0005539508 systemd[1]: libpod-conmon-ec5ef4187d31c60a36ab011e815bb8fe093027af562543e90170a7801fd8250f.scope: Deactivated successfully.
Nov 29 01:33:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:33:10 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:33:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:33:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:10 np0005539508 python3.9[164775]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 01:33:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:10.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:10 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:33:10 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev cebc02be-33a7-4b95-977c-44d85ce63f94 does not exist
Nov 29 01:33:10 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 1602ac6f-12fb-4e13-a643-23086e3e6f46 does not exist
Nov 29 01:33:10 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 0482c5ba-2734-477d-bf4c-b02a02194aad does not exist
Nov 29 01:33:11 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:33:11 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:33:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:11.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:11 np0005539508 python3.9[164984]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 01:33:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:12 np0005539508 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 01:33:12 np0005539508 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 01:33:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:33:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:12.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:33:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:33:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:33:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:33:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:33:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:33:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:33:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:33:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:33:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:33:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:33:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:33:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:33:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:33:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:33:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:33:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:33:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:33:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:33:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:33:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:33:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:33:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:33:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:13.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:14.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:15 np0005539508 podman[164992]: 2025-11-29 06:33:15.188243118 +0000 UTC m=+0.148181204 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller)
Nov 29 01:33:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:15.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:16 np0005539508 podman[165150]: 2025-11-29 06:33:16.200381913 +0000 UTC m=+0.114629276 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 01:33:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:16 np0005539508 python3.9[165196]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 01:33:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:16.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:33:17.216 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:33:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:33:17.217 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:33:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:33:17.218 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:33:17 np0005539508 python3.9[165281]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:33:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:17.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:33:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:18.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:33:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:19.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:33:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:20.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:21.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:33:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:22.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:33:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:23.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:33:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:33:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:33:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:33:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:33:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:33:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:33:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:33:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:24.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:33:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:33:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:25.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:33:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:26.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:27.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:27 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:33:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:28.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:33:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:29.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:33:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:33:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:33:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:33:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:33:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:33:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:33:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:33:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:33:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:33:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:33:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:30.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:31.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:33:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:33:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:32.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:33:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:33.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:34.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:33:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:35.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:33:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:33:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:36.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:33:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:37.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:33:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:38.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:39.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:40.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:41.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:33:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:33:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:42.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:33:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:43.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:44.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:33:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:45.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:33:46 np0005539508 podman[165540]: 2025-11-29 06:33:46.182388215 +0000 UTC m=+0.134955456 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 01:33:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:33:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:46.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:33:46 np0005539508 podman[165594]: 2025-11-29 06:33:46.768047267 +0000 UTC m=+0.066020107 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 01:33:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:47.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:48 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:33:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:48.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:33:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:49.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:33:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:50.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:51.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:52.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:33:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:53.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:33:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:33:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:33:54
Nov 29 01:33:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:33:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:33:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'images', 'vms', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', '.rgw.root']
Nov 29 01:33:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:33:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:33:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:33:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:33:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:33:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:33:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:33:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:54.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:55.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:56 np0005539508 kernel: SELinux:  Converting 2771 SID table entries...
Nov 29 01:33:56 np0005539508 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:33:56 np0005539508 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:33:56 np0005539508 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:33:56 np0005539508 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:33:56 np0005539508 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:33:56 np0005539508 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:33:56 np0005539508 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:33:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:56.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:33:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:57.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:33:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:33:58 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:33:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:33:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:58.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:33:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:33:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:33:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:59.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:34:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:34:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:00.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:34:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:01.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:02.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:34:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:03.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:34:03 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:34:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:34:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:04.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:34:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:05.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:34:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:06.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:34:06 np0005539508 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 29 01:34:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:34:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:07.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:34:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:34:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:08.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:09.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:10 np0005539508 kernel: SELinux:  Converting 2771 SID table entries...
Nov 29 01:34:10 np0005539508 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:34:10 np0005539508 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:34:10 np0005539508 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:34:10 np0005539508 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:34:10 np0005539508 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:34:10 np0005539508 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:34:10 np0005539508 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:34:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:10.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:11 np0005539508 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 29 01:34:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:11.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:34:12 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:34:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:34:12 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:34:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:12 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 5e0da141-7b32-4f52-b80f-47e56d5b6028 does not exist
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 482a086a-84b7-4e02-8696-9bd5a9742c88 does not exist
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 23119597-ad30-49e6-b54f-bca1dd7836fa does not exist
Nov 29 01:34:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:34:12 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:34:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:34:12 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:34:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:34:12 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:34:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:12.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:34:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:34:13 np0005539508 podman[166003]: 2025-11-29 06:34:13.295556664 +0000 UTC m=+0.118088145 container create b547932a68b45cb9e59c7a5dad5bcce13b6d6df15c592457341cc969c0984beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 01:34:13 np0005539508 podman[166003]: 2025-11-29 06:34:13.204234915 +0000 UTC m=+0.026766366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:34:13 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:34:13 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:34:13 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:34:13 np0005539508 systemd[1]: Started libpod-conmon-b547932a68b45cb9e59c7a5dad5bcce13b6d6df15c592457341cc969c0984beb.scope.
Nov 29 01:34:13 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:34:13 np0005539508 podman[166003]: 2025-11-29 06:34:13.467564808 +0000 UTC m=+0.290096289 container init b547932a68b45cb9e59c7a5dad5bcce13b6d6df15c592457341cc969c0984beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kirch, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:34:13 np0005539508 podman[166003]: 2025-11-29 06:34:13.480859617 +0000 UTC m=+0.303391068 container start b547932a68b45cb9e59c7a5dad5bcce13b6d6df15c592457341cc969c0984beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:34:13 np0005539508 determined_kirch[166018]: 167 167
Nov 29 01:34:13 np0005539508 systemd[1]: libpod-b547932a68b45cb9e59c7a5dad5bcce13b6d6df15c592457341cc969c0984beb.scope: Deactivated successfully.
Nov 29 01:34:13 np0005539508 podman[166003]: 2025-11-29 06:34:13.562269643 +0000 UTC m=+0.384801124 container attach b547932a68b45cb9e59c7a5dad5bcce13b6d6df15c592457341cc969c0984beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kirch, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 01:34:13 np0005539508 podman[166003]: 2025-11-29 06:34:13.562942842 +0000 UTC m=+0.385474293 container died b547932a68b45cb9e59c7a5dad5bcce13b6d6df15c592457341cc969c0984beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kirch, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:34:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:13.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:13 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:34:13 np0005539508 systemd[1]: var-lib-containers-storage-overlay-c1e3d042271d9109a20c1af675297af8916f0aa265268002958bfdc18c5d2a88-merged.mount: Deactivated successfully.
Nov 29 01:34:14 np0005539508 podman[166003]: 2025-11-29 06:34:14.300033599 +0000 UTC m=+1.122565090 container remove b547932a68b45cb9e59c7a5dad5bcce13b6d6df15c592457341cc969c0984beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kirch, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 01:34:14 np0005539508 systemd[1]: libpod-conmon-b547932a68b45cb9e59c7a5dad5bcce13b6d6df15c592457341cc969c0984beb.scope: Deactivated successfully.
Nov 29 01:34:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:14 np0005539508 podman[166045]: 2025-11-29 06:34:14.558571175 +0000 UTC m=+0.080996755 container create aa5d054c1f51d8269a1bdb350c5181991c4c35d6c94e943436e58f1af266d735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_curie, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:34:14 np0005539508 podman[166045]: 2025-11-29 06:34:14.521512066 +0000 UTC m=+0.043937656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:34:14 np0005539508 systemd[1]: Started libpod-conmon-aa5d054c1f51d8269a1bdb350c5181991c4c35d6c94e943436e58f1af266d735.scope.
Nov 29 01:34:14 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:34:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32331226be33a27048904908fff2178554db3b45e9850d354179b322efa7fbb8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:34:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32331226be33a27048904908fff2178554db3b45e9850d354179b322efa7fbb8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:34:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32331226be33a27048904908fff2178554db3b45e9850d354179b322efa7fbb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:34:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32331226be33a27048904908fff2178554db3b45e9850d354179b322efa7fbb8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:34:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32331226be33a27048904908fff2178554db3b45e9850d354179b322efa7fbb8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:34:14 np0005539508 podman[166045]: 2025-11-29 06:34:14.805511439 +0000 UTC m=+0.327936989 container init aa5d054c1f51d8269a1bdb350c5181991c4c35d6c94e943436e58f1af266d735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:34:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:34:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:14.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:34:14 np0005539508 podman[166045]: 2025-11-29 06:34:14.814412884 +0000 UTC m=+0.336838444 container start aa5d054c1f51d8269a1bdb350c5181991c4c35d6c94e943436e58f1af266d735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Nov 29 01:34:14 np0005539508 podman[166045]: 2025-11-29 06:34:14.819992703 +0000 UTC m=+0.342418263 container attach aa5d054c1f51d8269a1bdb350c5181991c4c35d6c94e943436e58f1af266d735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 01:34:15 np0005539508 eloquent_curie[166061]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:34:15 np0005539508 eloquent_curie[166061]: --> relative data size: 1.0
Nov 29 01:34:15 np0005539508 eloquent_curie[166061]: --> All data devices are unavailable
Nov 29 01:34:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:15.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:15 np0005539508 systemd[1]: libpod-aa5d054c1f51d8269a1bdb350c5181991c4c35d6c94e943436e58f1af266d735.scope: Deactivated successfully.
Nov 29 01:34:15 np0005539508 podman[166045]: 2025-11-29 06:34:15.605146674 +0000 UTC m=+1.127572234 container died aa5d054c1f51d8269a1bdb350c5181991c4c35d6c94e943436e58f1af266d735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:34:15 np0005539508 systemd[1]: var-lib-containers-storage-overlay-32331226be33a27048904908fff2178554db3b45e9850d354179b322efa7fbb8-merged.mount: Deactivated successfully.
Nov 29 01:34:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:16 np0005539508 podman[166045]: 2025-11-29 06:34:16.598686467 +0000 UTC m=+2.121112027 container remove aa5d054c1f51d8269a1bdb350c5181991c4c35d6c94e943436e58f1af266d735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_curie, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 01:34:16 np0005539508 systemd[1]: libpod-conmon-aa5d054c1f51d8269a1bdb350c5181991c4c35d6c94e943436e58f1af266d735.scope: Deactivated successfully.
Nov 29 01:34:16 np0005539508 podman[166091]: 2025-11-29 06:34:16.786424441 +0000 UTC m=+0.134248687 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 01:34:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:16.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:16 np0005539508 podman[166168]: 2025-11-29 06:34:16.878674116 +0000 UTC m=+0.056565737 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:34:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:34:17.217 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:34:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:34:17.218 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:34:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:34:17.218 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:34:17 np0005539508 podman[166278]: 2025-11-29 06:34:17.242249113 +0000 UTC m=+0.021415483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:34:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:17.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:17 np0005539508 podman[166278]: 2025-11-29 06:34:17.696309114 +0000 UTC m=+0.475475434 container create 464853d14779e7736e387e51b18e309994494ebeec9acf2996d942924102b206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cori, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 29 01:34:17 np0005539508 systemd[1]: Started libpod-conmon-464853d14779e7736e387e51b18e309994494ebeec9acf2996d942924102b206.scope.
Nov 29 01:34:17 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:34:18 np0005539508 podman[166278]: 2025-11-29 06:34:18.199167539 +0000 UTC m=+0.978333869 container init 464853d14779e7736e387e51b18e309994494ebeec9acf2996d942924102b206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cori, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 01:34:18 np0005539508 podman[166278]: 2025-11-29 06:34:18.207737214 +0000 UTC m=+0.986903534 container start 464853d14779e7736e387e51b18e309994494ebeec9acf2996d942924102b206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 01:34:18 np0005539508 sad_cori[166294]: 167 167
Nov 29 01:34:18 np0005539508 systemd[1]: libpod-464853d14779e7736e387e51b18e309994494ebeec9acf2996d942924102b206.scope: Deactivated successfully.
Nov 29 01:34:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:18 np0005539508 podman[166278]: 2025-11-29 06:34:18.43966291 +0000 UTC m=+1.218829230 container attach 464853d14779e7736e387e51b18e309994494ebeec9acf2996d942924102b206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 01:34:18 np0005539508 podman[166278]: 2025-11-29 06:34:18.440647268 +0000 UTC m=+1.219813608 container died 464853d14779e7736e387e51b18e309994494ebeec9acf2996d942924102b206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cori, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 01:34:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:34:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:18.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:18 np0005539508 systemd[1]: var-lib-containers-storage-overlay-7086dc547eb7aca5a2af3783860155700c4387b20c7d8f35006f20b5c5e5db7a-merged.mount: Deactivated successfully.
Nov 29 01:34:19 np0005539508 podman[166278]: 2025-11-29 06:34:19.029620224 +0000 UTC m=+1.808786584 container remove 464853d14779e7736e387e51b18e309994494ebeec9acf2996d942924102b206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 29 01:34:19 np0005539508 systemd[1]: libpod-conmon-464853d14779e7736e387e51b18e309994494ebeec9acf2996d942924102b206.scope: Deactivated successfully.
Nov 29 01:34:19 np0005539508 podman[166321]: 2025-11-29 06:34:19.191983462 +0000 UTC m=+0.028700761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:34:19 np0005539508 podman[166321]: 2025-11-29 06:34:19.443558189 +0000 UTC m=+0.280275488 container create bf2b276c11e7086a75d86e466c5344ad8cf33a11a8d3222c452e57784e226535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_montalcini, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 01:34:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:19.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:20 np0005539508 systemd[1]: Started libpod-conmon-bf2b276c11e7086a75d86e466c5344ad8cf33a11a8d3222c452e57784e226535.scope.
Nov 29 01:34:20 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:34:20 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc9294f4ab3b37bddcfdd7ea8eaa7f85e7b796d1bc1a710b56a57970c18c35b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:34:20 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc9294f4ab3b37bddcfdd7ea8eaa7f85e7b796d1bc1a710b56a57970c18c35b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:34:20 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc9294f4ab3b37bddcfdd7ea8eaa7f85e7b796d1bc1a710b56a57970c18c35b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:34:20 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc9294f4ab3b37bddcfdd7ea8eaa7f85e7b796d1bc1a710b56a57970c18c35b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:34:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:20.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:20 np0005539508 podman[166321]: 2025-11-29 06:34:20.82763202 +0000 UTC m=+1.664349369 container init bf2b276c11e7086a75d86e466c5344ad8cf33a11a8d3222c452e57784e226535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_montalcini, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 01:34:20 np0005539508 podman[166321]: 2025-11-29 06:34:20.839040796 +0000 UTC m=+1.675758125 container start bf2b276c11e7086a75d86e466c5344ad8cf33a11a8d3222c452e57784e226535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_montalcini, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:34:21 np0005539508 podman[166321]: 2025-11-29 06:34:21.363796786 +0000 UTC m=+2.200514125 container attach bf2b276c11e7086a75d86e466c5344ad8cf33a11a8d3222c452e57784e226535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]: {
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:    "1": [
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:        {
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:            "devices": [
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:                "/dev/loop3"
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:            ],
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:            "lv_name": "ceph_lv0",
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:            "lv_size": "7511998464",
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:            "name": "ceph_lv0",
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:            "tags": {
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:                "ceph.cluster_name": "ceph",
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:                "ceph.crush_device_class": "",
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:                "ceph.encrypted": "0",
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:                "ceph.osd_id": "1",
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:                "ceph.type": "block",
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:                "ceph.vdo": "0"
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:            },
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:            "type": "block",
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:            "vg_name": "ceph_vg0"
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:        }
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]:    ]
Nov 29 01:34:21 np0005539508 festive_montalcini[166338]: }
Nov 29 01:34:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:21.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:21 np0005539508 systemd[1]: libpod-bf2b276c11e7086a75d86e466c5344ad8cf33a11a8d3222c452e57784e226535.scope: Deactivated successfully.
Nov 29 01:34:21 np0005539508 podman[166321]: 2025-11-29 06:34:21.617838654 +0000 UTC m=+2.454555993 container died bf2b276c11e7086a75d86e466c5344ad8cf33a11a8d3222c452e57784e226535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 01:34:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:22.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:34:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:23.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:34:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:34:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:34:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:34:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:34:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:34:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:34:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:34:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:24.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:25.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:26 np0005539508 systemd[1]: var-lib-containers-storage-overlay-9bc9294f4ab3b37bddcfdd7ea8eaa7f85e7b796d1bc1a710b56a57970c18c35b-merged.mount: Deactivated successfully.
Nov 29 01:34:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:26.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:27.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:28 np0005539508 podman[166321]: 2025-11-29 06:34:28.087094096 +0000 UTC m=+8.923811395 container remove bf2b276c11e7086a75d86e466c5344ad8cf33a11a8d3222c452e57784e226535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_montalcini, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:34:28 np0005539508 systemd[1]: libpod-conmon-bf2b276c11e7086a75d86e466c5344ad8cf33a11a8d3222c452e57784e226535.scope: Deactivated successfully.
Nov 29 01:34:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:28 np0005539508 podman[168564]: 2025-11-29 06:34:28.704037978 +0000 UTC m=+0.050507223 container create a4fbafa9802ade5d936a227a0089273b259d40472eb532d12151d10567302f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:34:28 np0005539508 podman[168564]: 2025-11-29 06:34:28.680597479 +0000 UTC m=+0.027066744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:34:28 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:34:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:34:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:28.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:34:28 np0005539508 systemd[1]: Started libpod-conmon-a4fbafa9802ade5d936a227a0089273b259d40472eb532d12151d10567302f4a.scope.
Nov 29 01:34:29 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:34:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:34:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:34:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:34:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:34:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:34:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:29.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:34:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:34:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:34:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:34:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:34:29 np0005539508 podman[168564]: 2025-11-29 06:34:29.619588904 +0000 UTC m=+0.966058219 container init a4fbafa9802ade5d936a227a0089273b259d40472eb532d12151d10567302f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ardinghelli, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:34:29 np0005539508 podman[168564]: 2025-11-29 06:34:29.632381819 +0000 UTC m=+0.978851104 container start a4fbafa9802ade5d936a227a0089273b259d40472eb532d12151d10567302f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 01:34:29 np0005539508 tender_ardinghelli[168773]: 167 167
Nov 29 01:34:29 np0005539508 systemd[1]: libpod-a4fbafa9802ade5d936a227a0089273b259d40472eb532d12151d10567302f4a.scope: Deactivated successfully.
Nov 29 01:34:30 np0005539508 podman[168564]: 2025-11-29 06:34:30.024030947 +0000 UTC m=+1.370500232 container attach a4fbafa9802ade5d936a227a0089273b259d40472eb532d12151d10567302f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ardinghelli, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:34:30 np0005539508 podman[168564]: 2025-11-29 06:34:30.024736818 +0000 UTC m=+1.371206103 container died a4fbafa9802ade5d936a227a0089273b259d40472eb532d12151d10567302f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ardinghelli, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:34:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:30.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:31 np0005539508 systemd[1]: var-lib-containers-storage-overlay-e7a7583cb7c93af5d273a861c6b8db11944d27243347aa74d5f946b40e5288d4-merged.mount: Deactivated successfully.
Nov 29 01:34:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:31.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:32.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:33.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:34:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:34 np0005539508 podman[168564]: 2025-11-29 06:34:34.691139196 +0000 UTC m=+6.037608441 container remove a4fbafa9802ade5d936a227a0089273b259d40472eb532d12151d10567302f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:34:34 np0005539508 systemd[1]: libpod-conmon-a4fbafa9802ade5d936a227a0089273b259d40472eb532d12151d10567302f4a.scope: Deactivated successfully.
Nov 29 01:34:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:34.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:35 np0005539508 podman[171900]: 2025-11-29 06:34:34.92497222 +0000 UTC m=+0.029519501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:34:35 np0005539508 podman[171900]: 2025-11-29 06:34:35.559696999 +0000 UTC m=+0.664244290 container create a3650dbf9e5bca35aa9e4ff3f9719986ed11e66613976879baa33f76a7fed128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dirac, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 01:34:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:35.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:36 np0005539508 systemd[1]: Started libpod-conmon-a3650dbf9e5bca35aa9e4ff3f9719986ed11e66613976879baa33f76a7fed128.scope.
Nov 29 01:34:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:36.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:36 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:34:36 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da1f077d790b198b86c504577439c508e53074d22b665a6519b4ce036f9a0205/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:34:36 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da1f077d790b198b86c504577439c508e53074d22b665a6519b4ce036f9a0205/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:34:36 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da1f077d790b198b86c504577439c508e53074d22b665a6519b4ce036f9a0205/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:34:36 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da1f077d790b198b86c504577439c508e53074d22b665a6519b4ce036f9a0205/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:34:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:37.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:37 np0005539508 podman[171900]: 2025-11-29 06:34:37.723206548 +0000 UTC m=+2.827753859 container init a3650dbf9e5bca35aa9e4ff3f9719986ed11e66613976879baa33f76a7fed128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dirac, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:34:37 np0005539508 podman[171900]: 2025-11-29 06:34:37.735164793 +0000 UTC m=+2.839712074 container start a3650dbf9e5bca35aa9e4ff3f9719986ed11e66613976879baa33f76a7fed128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 01:34:38 np0005539508 podman[171900]: 2025-11-29 06:34:38.410617106 +0000 UTC m=+3.515164447 container attach a3650dbf9e5bca35aa9e4ff3f9719986ed11e66613976879baa33f76a7fed128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 01:34:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:38 np0005539508 jolly_dirac[172871]: {
Nov 29 01:34:38 np0005539508 jolly_dirac[172871]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:34:38 np0005539508 jolly_dirac[172871]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:34:38 np0005539508 jolly_dirac[172871]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:34:38 np0005539508 jolly_dirac[172871]:        "osd_id": 1,
Nov 29 01:34:38 np0005539508 jolly_dirac[172871]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:34:38 np0005539508 jolly_dirac[172871]:        "type": "bluestore"
Nov 29 01:34:38 np0005539508 jolly_dirac[172871]:    }
Nov 29 01:34:38 np0005539508 jolly_dirac[172871]: }
Nov 29 01:34:38 np0005539508 systemd[1]: libpod-a3650dbf9e5bca35aa9e4ff3f9719986ed11e66613976879baa33f76a7fed128.scope: Deactivated successfully.
Nov 29 01:34:38 np0005539508 podman[171900]: 2025-11-29 06:34:38.634864384 +0000 UTC m=+3.739411645 container died a3650dbf9e5bca35aa9e4ff3f9719986ed11e66613976879baa33f76a7fed128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dirac, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:34:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:34:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:38.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:34:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:34:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:34:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:39.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:34:40 np0005539508 systemd[1]: var-lib-containers-storage-overlay-da1f077d790b198b86c504577439c508e53074d22b665a6519b4ce036f9a0205-merged.mount: Deactivated successfully.
Nov 29 01:34:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:40.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:41.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:42 np0005539508 podman[171900]: 2025-11-29 06:34:42.159025979 +0000 UTC m=+7.263573230 container remove a3650dbf9e5bca35aa9e4ff3f9719986ed11e66613976879baa33f76a7fed128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 01:34:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:34:42 np0005539508 systemd[1]: libpod-conmon-a3650dbf9e5bca35aa9e4ff3f9719986ed11e66613976879baa33f76a7fed128.scope: Deactivated successfully.
Nov 29 01:34:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:42 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:34:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:34:42 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:34:42 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev df6a06d4-81be-400b-abd6-eeb9f7eb311e does not exist
Nov 29 01:34:42 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 68626a8a-066f-4c4a-98df-956a9b37e242 does not exist
Nov 29 01:34:42 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 0c4c8572-f509-4d6f-9780-055b47a88c47 does not exist
Nov 29 01:34:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:42.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:43.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:44 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:34:44 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:34:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:34:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:44.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:45.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:34:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:46.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:34:47 np0005539508 podman[178188]: 2025-11-29 06:34:47.123063311 +0000 UTC m=+0.070692957 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 01:34:47 np0005539508 podman[178196]: 2025-11-29 06:34:47.174352918 +0000 UTC m=+0.121724387 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 01:34:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:47.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:48.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:34:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:49.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:34:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:50.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:34:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:51.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:52.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:53.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:34:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:34:54
Nov 29 01:34:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:34:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:34:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', '.rgw.root', 'vms', 'volumes', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta']
Nov 29 01:34:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:34:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:34:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:34:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:34:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:34:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:34:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:34:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:54.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:55.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:56.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:57.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:34:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:58.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:34:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:34:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:34:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:34:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:59.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:35:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:00.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:35:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:01.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:02.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:03.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:35:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:35:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:04.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:35:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:05.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:06.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:07.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:08.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:35:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:09.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:35:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:35:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:10.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:35:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:11.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:35:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:35:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:12.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:35:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:35:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:35:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:35:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:35:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:35:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:35:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:35:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:35:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:35:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:35:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:35:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:35:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:35:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:35:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:35:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:35:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:35:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:35:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:35:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:35:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:35:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:35:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:35:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:13.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:35:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:14.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:15.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:16.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:35:17.219 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 01:35:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:35:17.220 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 01:35:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:35:17.220 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 01:35:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:17.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:18.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:35:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:19.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:35:20 np0005539508 podman[183684]: 2025-11-29 06:35:20.217472489 +0000 UTC m=+0.856818457 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 01:35:20 np0005539508 podman[183685]: 2025-11-29 06:35:20.294264771 +0000 UTC m=+0.938025226 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 01:35:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:20.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:35:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:21.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:35:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:35:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:22.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:35:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:23.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:35:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:35:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:35:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:35:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:35:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:35:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:35:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:24 np0005539508 kernel: SELinux:  Converting 2772 SID table entries...
Nov 29 01:35:24 np0005539508 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:35:24 np0005539508 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:35:24 np0005539508 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:35:24 np0005539508 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:35:24 np0005539508 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:35:24 np0005539508 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:35:24 np0005539508 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:35:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:24.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:25.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:35:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:26.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:35:27 np0005539508 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 29 01:35:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:27.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:27 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:35:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:28.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:35:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:35:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:35:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:35:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:35:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:35:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:35:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:35:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:35:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:35:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:29.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:30 np0005539508 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Nov 29 01:35:30 np0005539508 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Nov 29 01:35:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:30.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:31.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:35:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:32.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:33.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:35:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:34.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:35:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:35:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:35.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:35:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:36.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:37.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:35:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:38.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:39.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:40.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:35:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:41.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:35:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:35:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:35:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:42.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:35:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:43.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:35:44 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:35:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:35:44 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:35:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:35:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:35:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:44.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:35:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:45.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:46.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:47.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:35:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:35:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:48.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:35:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:35:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:49.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:35:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:50.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:50 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:35:51 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 12dfc784-8693-44f4-957e-6b10f2652c9e does not exist
Nov 29 01:35:51 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 6040f888-d17f-465a-a923-562bc5d2a68d does not exist
Nov 29 01:35:51 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 9bcf0475-7f10-465a-b936-9ea7241fe5cd does not exist
Nov 29 01:35:51 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:35:51 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:35:51 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:35:51 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:35:51 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:35:51 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:35:51 np0005539508 podman[184117]: 2025-11-29 06:35:51.146570993 +0000 UTC m=+0.087242213 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 01:35:51 np0005539508 podman[184120]: 2025-11-29 06:35:51.187063639 +0000 UTC m=+0.128983635 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 01:35:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:51.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:51 np0005539508 podman[184304]: 2025-11-29 06:35:51.665135078 +0000 UTC m=+0.028524813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:35:51 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:35:51 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:35:51 np0005539508 podman[184304]: 2025-11-29 06:35:51.975685891 +0000 UTC m=+0.339075526 container create 52ea9d790bab989811166b350b38f0ec147e72496e5da26ec8e3b7257200461b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_tesla, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 01:35:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:52 np0005539508 systemd[1]: Started libpod-conmon-52ea9d790bab989811166b350b38f0ec147e72496e5da26ec8e3b7257200461b.scope.
Nov 29 01:35:52 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:35:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:35:52 np0005539508 podman[184304]: 2025-11-29 06:35:52.834854625 +0000 UTC m=+1.198244280 container init 52ea9d790bab989811166b350b38f0ec147e72496e5da26ec8e3b7257200461b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_tesla, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:35:52 np0005539508 podman[184304]: 2025-11-29 06:35:52.84196949 +0000 UTC m=+1.205359125 container start 52ea9d790bab989811166b350b38f0ec147e72496e5da26ec8e3b7257200461b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_tesla, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 01:35:52 np0005539508 cranky_tesla[184372]: 167 167
Nov 29 01:35:52 np0005539508 systemd[1]: libpod-52ea9d790bab989811166b350b38f0ec147e72496e5da26ec8e3b7257200461b.scope: Deactivated successfully.
Nov 29 01:35:52 np0005539508 podman[184304]: 2025-11-29 06:35:52.854598104 +0000 UTC m=+1.217987759 container attach 52ea9d790bab989811166b350b38f0ec147e72496e5da26ec8e3b7257200461b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:35:52 np0005539508 podman[184304]: 2025-11-29 06:35:52.855415868 +0000 UTC m=+1.218805533 container died 52ea9d790bab989811166b350b38f0ec147e72496e5da26ec8e3b7257200461b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 01:35:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:52.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:53 np0005539508 systemd[1]: var-lib-containers-storage-overlay-59a088a850afa710d6692b90becdfa1b4776df8c1b010a127e90057b539c5387-merged.mount: Deactivated successfully.
Nov 29 01:35:53 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:35:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:53.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:35:54
Nov 29 01:35:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:35:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:35:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', '.mgr', 'volumes', '.rgw.root', 'vms', 'default.rgw.control', 'default.rgw.log', 'backups']
Nov 29 01:35:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:35:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:35:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:35:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:35:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:35:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:35:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:35:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:54.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:55.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:35:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:56.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:35:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:35:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:57.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:35:58 np0005539508 podman[184304]: 2025-11-29 06:35:58.26672655 +0000 UTC m=+6.630116235 container remove 52ea9d790bab989811166b350b38f0ec147e72496e5da26ec8e3b7257200461b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_tesla, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:35:58 np0005539508 systemd[1]: libpod-conmon-52ea9d790bab989811166b350b38f0ec147e72496e5da26ec8e3b7257200461b.scope: Deactivated successfully.
Nov 29 01:35:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:35:58 np0005539508 podman[184436]: 2025-11-29 06:35:58.435240783 +0000 UTC m=+0.026227556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:35:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:58.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:35:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:35:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:35:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:59.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:00 np0005539508 ceph-mds[94810]: mds.beacon.cephfs.compute-0.jzycnf missed beacon ack from the monitors
Nov 29 01:36:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:00.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:01.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:02 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:36:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:36:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:03.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:36:03 np0005539508 podman[184436]: 2025-11-29 06:36:03.0941922 +0000 UTC m=+4.685178933 container create d61b3e80e2750c56023bc1c63fa938436e49d21efd02e52deeedc7cd47c93fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_babbage, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 01:36:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:03.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:04 np0005539508 systemd[1]: Started libpod-conmon-d61b3e80e2750c56023bc1c63fa938436e49d21efd02e52deeedc7cd47c93fa9.scope.
Nov 29 01:36:04 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:36:04 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc5024b121fb257fd3a29ca8f358ad2ccce7316febdde0e0b8cd2097326af49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:36:04 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc5024b121fb257fd3a29ca8f358ad2ccce7316febdde0e0b8cd2097326af49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:36:04 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc5024b121fb257fd3a29ca8f358ad2ccce7316febdde0e0b8cd2097326af49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:36:04 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc5024b121fb257fd3a29ca8f358ad2ccce7316febdde0e0b8cd2097326af49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:36:04 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc5024b121fb257fd3a29ca8f358ad2ccce7316febdde0e0b8cd2097326af49/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:36:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:04 np0005539508 podman[184436]: 2025-11-29 06:36:04.760394048 +0000 UTC m=+6.351380811 container init d61b3e80e2750c56023bc1c63fa938436e49d21efd02e52deeedc7cd47c93fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_babbage, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 01:36:04 np0005539508 podman[184436]: 2025-11-29 06:36:04.772384403 +0000 UTC m=+6.363371106 container start d61b3e80e2750c56023bc1c63fa938436e49d21efd02e52deeedc7cd47c93fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 29 01:36:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:36:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:05.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:36:05 np0005539508 upbeat_babbage[184459]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:36:05 np0005539508 upbeat_babbage[184459]: --> relative data size: 1.0
Nov 29 01:36:05 np0005539508 upbeat_babbage[184459]: --> All data devices are unavailable
Nov 29 01:36:05 np0005539508 systemd[1]: libpod-d61b3e80e2750c56023bc1c63fa938436e49d21efd02e52deeedc7cd47c93fa9.scope: Deactivated successfully.
Nov 29 01:36:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:05.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:06 np0005539508 podman[184436]: 2025-11-29 06:36:06.707016959 +0000 UTC m=+8.298003652 container attach d61b3e80e2750c56023bc1c63fa938436e49d21efd02e52deeedc7cd47c93fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_babbage, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 01:36:06 np0005539508 podman[184436]: 2025-11-29 06:36:06.710209391 +0000 UTC m=+8.301196124 container died d61b3e80e2750c56023bc1c63fa938436e49d21efd02e52deeedc7cd47c93fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_babbage, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:36:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:07.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:07 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 01:36:07 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.0 total, 600.0 interval
Cumulative writes: 2778 writes, 12K keys, 2778 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 2778 writes, 2778 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1075 writes, 4445 keys, 1075 commit groups, 1.0 writes per commit group, ingest: 7.66 MB, 0.01 MB/s
Interval WAL: 1075 writes, 1075 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.0      1.01              0.03         4    0.253       0      0       0.0       0.0
  L6      1/0    9.10 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1     13.7     12.0      2.32              0.09         3    0.774     12K   1290       0.0       0.0
 Sum      1/0    9.10 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1      9.6     12.3      3.33              0.12         7    0.476     12K   1290       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1      9.6     12.3      3.33              0.12         6    0.555     12K   1290       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     13.7     12.0      2.32              0.09         3    0.774     12K   1290       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.0      1.01              0.03         3    0.335       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.013, interval 0.013
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.04 GB write, 0.03 MB/s write, 0.03 GB read, 0.03 MB/s read, 3.3 seconds
Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.03 GB read, 0.05 MB/s read, 3.3 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55e1a58311f0#2 capacity: 304.00 MB usage: 1.34 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 4.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(69,1.20 MB,0.395885%) FilterBlock(8,44.11 KB,0.0141696%) IndexBlock(8,98.33 KB,0.0315867%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 29 01:36:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:07.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:08 np0005539508 ceph-mds[94810]: mds.beacon.cephfs.compute-0.jzycnf missed beacon ack from the monitors
Nov 29 01:36:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:36:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:09.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:36:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:36:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:09.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:11.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:11 np0005539508 systemd[1]: var-lib-containers-storage-overlay-bcc5024b121fb257fd3a29ca8f358ad2ccce7316febdde0e0b8cd2097326af49-merged.mount: Deactivated successfully.
Nov 29 01:36:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:11.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:12 np0005539508 podman[184436]: 2025-11-29 06:36:12.189839353 +0000 UTC m=+13.780826086 container remove d61b3e80e2750c56023bc1c63fa938436e49d21efd02e52deeedc7cd47c93fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_babbage, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:36:12 np0005539508 systemd[1]: libpod-conmon-d61b3e80e2750c56023bc1c63fa938436e49d21efd02e52deeedc7cd47c93fa9.scope: Deactivated successfully.
Nov 29 01:36:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:12 np0005539508 podman[184697]: 2025-11-29 06:36:12.790277855 +0000 UTC m=+0.023580280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:36:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:36:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:36:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:36:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:36:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:36:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:36:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:36:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:36:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:36:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:36:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:36:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:36:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:36:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:36:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:36:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:36:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:36:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:36:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:36:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:36:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:36:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:36:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:36:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:13.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:13 np0005539508 podman[184697]: 2025-11-29 06:36:13.575183821 +0000 UTC m=+0.808486276 container create 0064365d7c36114eef7806509a69e4faa7756a9bfabc82f7fd753ca5a2b166eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_robinson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:36:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:13.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:13 np0005539508 systemd[1]: Started libpod-conmon-0064365d7c36114eef7806509a69e4faa7756a9bfabc82f7fd753ca5a2b166eb.scope.
Nov 29 01:36:13 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:36:14 np0005539508 podman[184697]: 2025-11-29 06:36:14.456435481 +0000 UTC m=+1.689737986 container init 0064365d7c36114eef7806509a69e4faa7756a9bfabc82f7fd753ca5a2b166eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_robinson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 01:36:14 np0005539508 podman[184697]: 2025-11-29 06:36:14.469821977 +0000 UTC m=+1.703124402 container start 0064365d7c36114eef7806509a69e4faa7756a9bfabc82f7fd753ca5a2b166eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_robinson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:36:14 np0005539508 systemd[1]: libpod-0064365d7c36114eef7806509a69e4faa7756a9bfabc82f7fd753ca5a2b166eb.scope: Deactivated successfully.
Nov 29 01:36:14 np0005539508 silly_robinson[184714]: 167 167
Nov 29 01:36:14 np0005539508 conmon[184714]: conmon 0064365d7c36114eef78 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0064365d7c36114eef7806509a69e4faa7756a9bfabc82f7fd753ca5a2b166eb.scope/container/memory.events
Nov 29 01:36:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:36:14 np0005539508 podman[184697]: 2025-11-29 06:36:14.610423476 +0000 UTC m=+1.843725921 container attach 0064365d7c36114eef7806509a69e4faa7756a9bfabc82f7fd753ca5a2b166eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_robinson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:36:14 np0005539508 podman[184697]: 2025-11-29 06:36:14.611718723 +0000 UTC m=+1.845021138 container died 0064365d7c36114eef7806509a69e4faa7756a9bfabc82f7fd753ca5a2b166eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 01:36:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:15.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:15 np0005539508 systemd[1]: var-lib-containers-storage-overlay-1c829df6f89bbee31f04e7d99237008dc55c347f34ca4ae896fd8665e458fd49-merged.mount: Deactivated successfully.
Nov 29 01:36:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:15.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:17.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:17 np0005539508 podman[184697]: 2025-11-29 06:36:17.203001631 +0000 UTC m=+4.436304076 container remove 0064365d7c36114eef7806509a69e4faa7756a9bfabc82f7fd753ca5a2b166eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Nov 29 01:36:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:36:17.220 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:36:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:36:17.221 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:36:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:36:17.221 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:36:17 np0005539508 systemd[1]: Stopping OpenSSH server daemon...
Nov 29 01:36:17 np0005539508 systemd[1]: sshd.service: Deactivated successfully.
Nov 29 01:36:17 np0005539508 systemd[1]: Stopped OpenSSH server daemon.
Nov 29 01:36:17 np0005539508 systemd[1]: sshd.service: Consumed 13.616s CPU time, read 32.0K from disk, written 368.0K to disk.
Nov 29 01:36:17 np0005539508 systemd[1]: Stopped target sshd-keygen.target.
Nov 29 01:36:17 np0005539508 systemd[1]: Stopping sshd-keygen.target...
Nov 29 01:36:17 np0005539508 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 01:36:17 np0005539508 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 01:36:17 np0005539508 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 01:36:17 np0005539508 systemd[1]: Reached target sshd-keygen.target.
Nov 29 01:36:17 np0005539508 systemd[1]: Starting OpenSSH server daemon...
Nov 29 01:36:17 np0005539508 systemd[1]: libpod-conmon-0064365d7c36114eef7806509a69e4faa7756a9bfabc82f7fd753ca5a2b166eb.scope: Deactivated successfully.
Nov 29 01:36:17 np0005539508 systemd[1]: Started OpenSSH server daemon.
Nov 29 01:36:17 np0005539508 podman[185380]: 2025-11-29 06:36:17.435505268 +0000 UTC m=+0.032787176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:36:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:36:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:17.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:36:18 np0005539508 podman[185380]: 2025-11-29 06:36:18.195536536 +0000 UTC m=+0.792818364 container create 007f58c3507647dce4e1047107cdf8e58e5c7cb37a8698534430f80656c7d020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:36:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:18 np0005539508 systemd[1]: Started libpod-conmon-007f58c3507647dce4e1047107cdf8e58e5c7cb37a8698534430f80656c7d020.scope.
Nov 29 01:36:18 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:36:18 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91f4c9a7f3fe2b2e02bb0f03bee47ec61febaea53a5267d4e30a0478debfa50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:36:18 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91f4c9a7f3fe2b2e02bb0f03bee47ec61febaea53a5267d4e30a0478debfa50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:36:18 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91f4c9a7f3fe2b2e02bb0f03bee47ec61febaea53a5267d4e30a0478debfa50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:36:18 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91f4c9a7f3fe2b2e02bb0f03bee47ec61febaea53a5267d4e30a0478debfa50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:36:18 np0005539508 podman[185380]: 2025-11-29 06:36:18.768332563 +0000 UTC m=+1.365614391 container init 007f58c3507647dce4e1047107cdf8e58e5c7cb37a8698534430f80656c7d020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carver, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:36:18 np0005539508 podman[185380]: 2025-11-29 06:36:18.7772744 +0000 UTC m=+1.374556218 container start 007f58c3507647dce4e1047107cdf8e58e5c7cb37a8698534430f80656c7d020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 01:36:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:19.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:19 np0005539508 podman[185380]: 2025-11-29 06:36:19.101058535 +0000 UTC m=+1.698340403 container attach 007f58c3507647dce4e1047107cdf8e58e5c7cb37a8698534430f80656c7d020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carver, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 01:36:19 np0005539508 objective_carver[185510]: {
Nov 29 01:36:19 np0005539508 objective_carver[185510]:    "1": [
Nov 29 01:36:19 np0005539508 objective_carver[185510]:        {
Nov 29 01:36:19 np0005539508 objective_carver[185510]:            "devices": [
Nov 29 01:36:19 np0005539508 objective_carver[185510]:                "/dev/loop3"
Nov 29 01:36:19 np0005539508 objective_carver[185510]:            ],
Nov 29 01:36:19 np0005539508 objective_carver[185510]:            "lv_name": "ceph_lv0",
Nov 29 01:36:19 np0005539508 objective_carver[185510]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:36:19 np0005539508 objective_carver[185510]:            "lv_size": "7511998464",
Nov 29 01:36:19 np0005539508 objective_carver[185510]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:36:19 np0005539508 objective_carver[185510]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:36:19 np0005539508 objective_carver[185510]:            "name": "ceph_lv0",
Nov 29 01:36:19 np0005539508 objective_carver[185510]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:36:19 np0005539508 objective_carver[185510]:            "tags": {
Nov 29 01:36:19 np0005539508 objective_carver[185510]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:36:19 np0005539508 objective_carver[185510]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:36:19 np0005539508 objective_carver[185510]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:36:19 np0005539508 objective_carver[185510]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:36:19 np0005539508 objective_carver[185510]:                "ceph.cluster_name": "ceph",
Nov 29 01:36:19 np0005539508 objective_carver[185510]:                "ceph.crush_device_class": "",
Nov 29 01:36:19 np0005539508 objective_carver[185510]:                "ceph.encrypted": "0",
Nov 29 01:36:19 np0005539508 objective_carver[185510]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:36:19 np0005539508 objective_carver[185510]:                "ceph.osd_id": "1",
Nov 29 01:36:19 np0005539508 objective_carver[185510]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:36:19 np0005539508 objective_carver[185510]:                "ceph.type": "block",
Nov 29 01:36:19 np0005539508 objective_carver[185510]:                "ceph.vdo": "0"
Nov 29 01:36:19 np0005539508 objective_carver[185510]:            },
Nov 29 01:36:19 np0005539508 objective_carver[185510]:            "type": "block",
Nov 29 01:36:19 np0005539508 objective_carver[185510]:            "vg_name": "ceph_vg0"
Nov 29 01:36:19 np0005539508 objective_carver[185510]:        }
Nov 29 01:36:19 np0005539508 objective_carver[185510]:    ]
Nov 29 01:36:19 np0005539508 objective_carver[185510]: }
Nov 29 01:36:19 np0005539508 systemd[1]: libpod-007f58c3507647dce4e1047107cdf8e58e5c7cb37a8698534430f80656c7d020.scope: Deactivated successfully.
Nov 29 01:36:19 np0005539508 podman[185380]: 2025-11-29 06:36:19.632332875 +0000 UTC m=+2.229614723 container died 007f58c3507647dce4e1047107cdf8e58e5c7cb37a8698534430f80656c7d020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carver, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 01:36:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:19.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:36:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:20 np0005539508 systemd[1]: var-lib-containers-storage-overlay-b91f4c9a7f3fe2b2e02bb0f03bee47ec61febaea53a5267d4e30a0478debfa50-merged.mount: Deactivated successfully.
Nov 29 01:36:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:21.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:21 np0005539508 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 01:36:21 np0005539508 systemd[1]: Starting man-db-cache-update.service...
Nov 29 01:36:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:21.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:22 np0005539508 systemd[1]: Reloading.
Nov 29 01:36:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:22 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:36:22 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:36:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:23.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:23 np0005539508 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 01:36:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:23.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:36:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:36:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:36:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:36:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:36:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:36:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.003000086s ======
Nov 29 01:36:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:25.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000086s
Nov 29 01:36:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:36:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:25.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:36:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:27.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:27.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:28 np0005539508 ceph-mds[94810]: mds.beacon.cephfs.compute-0.jzycnf missed beacon ack from the monitors
Nov 29 01:36:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:29.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:36:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:36:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:36:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:36:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:36:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:36:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:36:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:36:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:36:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:36:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:29.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:36:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:31.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:36:31 np0005539508 podman[185380]: 2025-11-29 06:36:31.271501319 +0000 UTC m=+13.868783177 container remove 007f58c3507647dce4e1047107cdf8e58e5c7cb37a8698534430f80656c7d020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carver, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 01:36:31 np0005539508 systemd[1]: libpod-conmon-007f58c3507647dce4e1047107cdf8e58e5c7cb37a8698534430f80656c7d020.scope: Deactivated successfully.
Nov 29 01:36:31 np0005539508 podman[185613]: 2025-11-29 06:36:31.356750744 +0000 UTC m=+10.137432665 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:36:31 np0005539508 podman[185623]: 2025-11-29 06:36:31.47986042 +0000 UTC m=+10.253879960 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 01:36:31 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).mds e10 check_health: resetting beacon timeouts due to mon delay (slow election?) of 11.6906 seconds
Nov 29 01:36:31 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:36:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:31.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:32 np0005539508 podman[186963]: 2025-11-29 06:36:32.110049059 +0000 UTC m=+0.040516668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:36:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:33.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:33 np0005539508 podman[186963]: 2025-11-29 06:36:33.111609574 +0000 UTC m=+1.042077103 container create 7b683feada6226a3affba0e3b84e9cec8db54d9be00dab37e23b06d446cc8fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 01:36:33 np0005539508 systemd[1]: Started libpod-conmon-7b683feada6226a3affba0e3b84e9cec8db54d9be00dab37e23b06d446cc8fe8.scope.
Nov 29 01:36:33 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:36:33 np0005539508 podman[186963]: 2025-11-29 06:36:33.573357301 +0000 UTC m=+1.503824930 container init 7b683feada6226a3affba0e3b84e9cec8db54d9be00dab37e23b06d446cc8fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goodall, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 01:36:33 np0005539508 podman[186963]: 2025-11-29 06:36:33.586260313 +0000 UTC m=+1.516727852 container start 7b683feada6226a3affba0e3b84e9cec8db54d9be00dab37e23b06d446cc8fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goodall, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:36:33 np0005539508 blissful_goodall[187931]: 167 167
Nov 29 01:36:33 np0005539508 systemd[1]: libpod-7b683feada6226a3affba0e3b84e9cec8db54d9be00dab37e23b06d446cc8fe8.scope: Deactivated successfully.
Nov 29 01:36:33 np0005539508 podman[186963]: 2025-11-29 06:36:33.636420728 +0000 UTC m=+1.566888297 container attach 7b683feada6226a3affba0e3b84e9cec8db54d9be00dab37e23b06d446cc8fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 01:36:33 np0005539508 podman[186963]: 2025-11-29 06:36:33.637474848 +0000 UTC m=+1.567942407 container died 7b683feada6226a3affba0e3b84e9cec8db54d9be00dab37e23b06d446cc8fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goodall, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Nov 29 01:36:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:33.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:34 np0005539508 systemd[1]: var-lib-containers-storage-overlay-ee6f149acb625a135be5f68a685a1fa8bcc020eb3afe60d05317e6a34f055d3e-merged.mount: Deactivated successfully.
Nov 29 01:36:34 np0005539508 podman[186963]: 2025-11-29 06:36:34.121498058 +0000 UTC m=+2.051965587 container remove 7b683feada6226a3affba0e3b84e9cec8db54d9be00dab37e23b06d446cc8fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 01:36:34 np0005539508 systemd[1]: libpod-conmon-7b683feada6226a3affba0e3b84e9cec8db54d9be00dab37e23b06d446cc8fe8.scope: Deactivated successfully.
Nov 29 01:36:34 np0005539508 podman[188691]: 2025-11-29 06:36:34.277316275 +0000 UTC m=+0.026184685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:36:34 np0005539508 podman[188691]: 2025-11-29 06:36:34.418221223 +0000 UTC m=+0.167089653 container create b5933e5477720709bb8410ebc7ca9ffe60b64e3986b680f40782d3223f40a745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_banzai, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:36:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:34 np0005539508 systemd[1]: Started libpod-conmon-b5933e5477720709bb8410ebc7ca9ffe60b64e3986b680f40782d3223f40a745.scope.
Nov 29 01:36:34 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:36:34 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1747655b454def29856f4913ffeb1b3dd0734a58846f9cbdd0f9d949e976f6d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:36:34 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1747655b454def29856f4913ffeb1b3dd0734a58846f9cbdd0f9d949e976f6d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:36:34 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1747655b454def29856f4913ffeb1b3dd0734a58846f9cbdd0f9d949e976f6d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:36:34 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1747655b454def29856f4913ffeb1b3dd0734a58846f9cbdd0f9d949e976f6d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:36:34 np0005539508 podman[188691]: 2025-11-29 06:36:34.593571263 +0000 UTC m=+0.342439703 container init b5933e5477720709bb8410ebc7ca9ffe60b64e3986b680f40782d3223f40a745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:36:34 np0005539508 podman[188691]: 2025-11-29 06:36:34.602603574 +0000 UTC m=+0.351471964 container start b5933e5477720709bb8410ebc7ca9ffe60b64e3986b680f40782d3223f40a745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 01:36:34 np0005539508 podman[188691]: 2025-11-29 06:36:34.608552085 +0000 UTC m=+0.357420495 container attach b5933e5477720709bb8410ebc7ca9ffe60b64e3986b680f40782d3223f40a745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_banzai, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:36:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:36:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:35.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:36:35 np0005539508 wonderful_banzai[189018]: {
Nov 29 01:36:35 np0005539508 wonderful_banzai[189018]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:36:35 np0005539508 wonderful_banzai[189018]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:36:35 np0005539508 wonderful_banzai[189018]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:36:35 np0005539508 wonderful_banzai[189018]:        "osd_id": 1,
Nov 29 01:36:35 np0005539508 wonderful_banzai[189018]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:36:35 np0005539508 wonderful_banzai[189018]:        "type": "bluestore"
Nov 29 01:36:35 np0005539508 wonderful_banzai[189018]:    }
Nov 29 01:36:35 np0005539508 wonderful_banzai[189018]: }
Nov 29 01:36:35 np0005539508 systemd[1]: libpod-b5933e5477720709bb8410ebc7ca9ffe60b64e3986b680f40782d3223f40a745.scope: Deactivated successfully.
Nov 29 01:36:35 np0005539508 podman[188691]: 2025-11-29 06:36:35.488965741 +0000 UTC m=+1.237834131 container died b5933e5477720709bb8410ebc7ca9ffe60b64e3986b680f40782d3223f40a745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_banzai, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:36:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:35.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:36 np0005539508 systemd[1]: var-lib-containers-storage-overlay-1747655b454def29856f4913ffeb1b3dd0734a58846f9cbdd0f9d949e976f6d1-merged.mount: Deactivated successfully.
Nov 29 01:36:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:36:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:36:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:37.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:36:37 np0005539508 podman[188691]: 2025-11-29 06:36:37.376779229 +0000 UTC m=+3.125647659 container remove b5933e5477720709bb8410ebc7ca9ffe60b64e3986b680f40782d3223f40a745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 29 01:36:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:36:37 np0005539508 systemd[1]: libpod-conmon-b5933e5477720709bb8410ebc7ca9ffe60b64e3986b680f40782d3223f40a745.scope: Deactivated successfully.
Nov 29 01:36:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:36:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:37.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:36:37 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:36:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:36:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:39.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:39.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:40 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:36:40 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:36:40 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev a3fa91c9-560a-4b38-9c60-2fcbdb83f66e does not exist
Nov 29 01:36:40 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev e8218213-fa7f-408b-bee5-e1aa80e95216 does not exist
Nov 29 01:36:40 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev f53883b5-e089-42a3-93aa-71c1bfd0eb44 does not exist
Nov 29 01:36:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:41.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:41 np0005539508 python3.9[194353]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 01:36:41 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:36:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:41.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:41 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:36:42 np0005539508 systemd[1]: Reloading.
Nov 29 01:36:42 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:36:42 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:36:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:42 np0005539508 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 01:36:42 np0005539508 systemd[1]: Finished man-db-cache-update.service.
Nov 29 01:36:42 np0005539508 systemd[1]: man-db-cache-update.service: Consumed 12.072s CPU time.
Nov 29 01:36:42 np0005539508 systemd[1]: run-r82c68b860e11417faf59952e344d78d4.service: Deactivated successfully.
Nov 29 01:36:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:43.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:43 np0005539508 python3.9[194790]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 01:36:43 np0005539508 systemd[1]: Reloading.
Nov 29 01:36:43 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:36:43 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:36:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:43.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:44 np0005539508 python3.9[194980]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 01:36:44 np0005539508 systemd[1]: Reloading.
Nov 29 01:36:44 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:36:44 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:36:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:45.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:45.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:46 np0005539508 python3.9[195171]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 01:36:46 np0005539508 systemd[1]: Reloading.
Nov 29 01:36:46 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:36:46 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:36:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:36:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:36:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:47.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:36:47 np0005539508 python3.9[195362]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 01:36:47 np0005539508 systemd[1]: Reloading.
Nov 29 01:36:47 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:36:47 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:36:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:47.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:49 np0005539508 python3.9[195602]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 01:36:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:49.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:49 np0005539508 systemd[1]: Reloading.
Nov 29 01:36:49 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:36:49 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:36:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:36:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:49.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:36:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:50 np0005539508 python3.9[195795]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 01:36:50 np0005539508 systemd[1]: Reloading.
Nov 29 01:36:50 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:36:50 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:36:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:51.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:51 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:36:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:51.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:51 np0005539508 python3.9[195991]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 01:36:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:52 np0005539508 python3.9[196146]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 01:36:52 np0005539508 systemd[1]: Reloading.
Nov 29 01:36:52 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:36:52 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:36:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:53.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:53.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:53 np0005539508 python3.9[196338]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 01:36:53 np0005539508 systemd[1]: Reloading.
Nov 29 01:36:53 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:36:54 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:36:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:36:54
Nov 29 01:36:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:36:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:36:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['.mgr', 'vms', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'images', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta']
Nov 29 01:36:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:36:54 np0005539508 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 29 01:36:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:36:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:36:54 np0005539508 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 29 01:36:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:36:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:36:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:36:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:36:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:55.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:55.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:36:57 np0005539508 python3.9[196532]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 01:36:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:36:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:57.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.469551) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398217469638, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2187, "num_deletes": 251, "total_data_size": 4253493, "memory_usage": 4343856, "flush_reason": "Manual Compaction"}
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 29 01:36:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:57.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398217775054, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 4155632, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10952, "largest_seqno": 13138, "table_properties": {"data_size": 4145714, "index_size": 6348, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19625, "raw_average_key_size": 20, "raw_value_size": 4125981, "raw_average_value_size": 4210, "num_data_blocks": 284, "num_entries": 980, "num_filter_entries": 980, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764397891, "oldest_key_time": 1764397891, "file_creation_time": 1764398217, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 305542 microseconds, and 24869 cpu microseconds.
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.775095) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 4155632 bytes OK
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.775113) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.803773) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.803827) EVENT_LOG_v1 {"time_micros": 1764398217803817, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.803851) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4244737, prev total WAL file size 4244737, number of live WAL files 2.
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.805522) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(4058KB)], [26(9323KB)]
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398217805644, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 13702761, "oldest_snapshot_seqno": -1}
Nov 29 01:36:57 np0005539508 python3.9[196688]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4461 keys, 10609626 bytes, temperature: kUnknown
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398217921213, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 10609626, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10574832, "index_size": 22524, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11205, "raw_key_size": 109117, "raw_average_key_size": 24, "raw_value_size": 10489312, "raw_average_value_size": 2351, "num_data_blocks": 972, "num_entries": 4461, "num_filter_entries": 4461, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 1764398217, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.921564) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 10609626 bytes
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.922974) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 118.4 rd, 91.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 9.1 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(5.9) write-amplify(2.6) OK, records in: 4979, records dropped: 518 output_compression: NoCompression
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.923006) EVENT_LOG_v1 {"time_micros": 1764398217922990, "job": 10, "event": "compaction_finished", "compaction_time_micros": 115702, "compaction_time_cpu_micros": 52761, "output_level": 6, "num_output_files": 1, "total_output_size": 10609626, "num_input_records": 4979, "num_output_records": 4461, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398217924301, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398217927572, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.805334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.927651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.927656) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.927658) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.927659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:36:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.927661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:36:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:36:58 np0005539508 python3.9[196843]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 01:36:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:59.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:36:59 np0005539508 python3.9[196999]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 01:36:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:36:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:36:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:59.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:01.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:01 np0005539508 python3.9[197155]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 01:37:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:01.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:01 np0005539508 podman[197157]: 2025-11-29 06:37:01.803980196 +0000 UTC m=+0.071397157 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 01:37:01 np0005539508 podman[197158]: 2025-11-29 06:37:01.837864962 +0000 UTC m=+0.098348241 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 01:37:01 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:37:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:02 np0005539508 python3.9[197354]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 01:37:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:37:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:03.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:37:03 np0005539508 python3.9[197510]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 01:37:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:03.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:04 np0005539508 python3.9[197665]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 01:37:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:37:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:05.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:37:05 np0005539508 python3.9[197820]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 01:37:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:05.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:06 np0005539508 python3.9[197976]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 01:37:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:37:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:07.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:07 np0005539508 python3.9[198131]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 01:37:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:07.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:08 np0005539508 python3.9[198287]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 01:37:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:37:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:09.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:37:09 np0005539508 python3.9[198492]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 01:37:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:09.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:10 np0005539508 python3.9[198648]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 01:37:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:11.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:11 np0005539508 python3.9[198804]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:37:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:11.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:11 np0005539508 python3.9[198956]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:37:11 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:37:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:12 np0005539508 python3.9[199108]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:37:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:37:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:37:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:37:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:37:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:37:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:37:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:37:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:37:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:37:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:37:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:37:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:37:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:37:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:37:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:37:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:37:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:37:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:37:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:37:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:37:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:37:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:37:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:37:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:37:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:13.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:37:13 np0005539508 python3.9[199261]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:37:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:13.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:13 np0005539508 python3.9[199413]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:37:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:14 np0005539508 python3.9[199565]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:37:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:15.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:15 np0005539508 python3.9[199718]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:37:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:37:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:15.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:37:16 np0005539508 python3.9[199843]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764398234.9088657-1630-170388195490610/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:16 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:37:16 np0005539508 python3.9[199995]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:37:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:17.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:37:17.222 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:37:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:37:17.223 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:37:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:37:17.223 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:37:17 np0005539508 python3.9[200121]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764398236.380556-1630-47103561422678/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:37:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:17.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:37:18 np0005539508 python3.9[200273]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:37:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:18 np0005539508 python3.9[200398]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764398237.6762164-1630-50679995799330/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:19.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:19 np0005539508 python3.9[200553]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:37:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:37:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:19.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:37:20 np0005539508 python3.9[200678]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764398238.9253066-1630-83226004075043/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:20 np0005539508 python3.9[200830]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:37:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:21.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:21 np0005539508 python3.9[200956]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764398240.220464-1630-210046959776749/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:21.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:37:22 np0005539508 python3.9[201108]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:37:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:22 np0005539508 python3.9[201233]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764398241.5333674-1630-16078631982296/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:23.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:23 np0005539508 python3.9[201386]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:37:23 np0005539508 python3.9[201511]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764398242.7243857-1630-84063304560593/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:37:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:23.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:37:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:37:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:37:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:37:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:37:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:37:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:37:24 np0005539508 python3.9[201663]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:37:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:25 np0005539508 python3.9[201790]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764398243.9052818-1630-62585285579945/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:37:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:25.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:37:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:25.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:37:27 np0005539508 python3.9[201944]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 29 01:37:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:27.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:27.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:29.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:37:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:37:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:37:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:37:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:37:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:37:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:37:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:37:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:37:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:37:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:37:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:29.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:37:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:31 np0005539508 python3.9[202148]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:37:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:31.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:37:31 np0005539508 python3.9[202301]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:31.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:32 np0005539508 podman[202349]: 2025-11-29 06:37:32.125062927 +0000 UTC m=+0.083476259 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 01:37:32 np0005539508 podman[202350]: 2025-11-29 06:37:32.162860406 +0000 UTC m=+0.118753564 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 01:37:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:37:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:32 np0005539508 python3.9[202498]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:33.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:33 np0005539508 python3.9[202651]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:37:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:33.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:37:34 np0005539508 python3.9[202803]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:37:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:35.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:37:35 np0005539508 python3.9[202955]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:35.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:35 np0005539508 python3.9[203108]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:36 np0005539508 python3.9[203262]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:37.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:37:37 np0005539508 python3.9[203415]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:37.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:38 np0005539508 python3.9[203567]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:37:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:39.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:37:39 np0005539508 python3.9[203720]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:39.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:40 np0005539508 python3.9[203872]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:40 np0005539508 python3.9[204024]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:41.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:41 np0005539508 python3.9[204181]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:41.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:42 np0005539508 python3.9[204509]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:37:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:37:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 01:37:42 np0005539508 podman[204501]: 2025-11-29 06:37:42.697042083 +0000 UTC m=+0.830715326 container exec c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 01:37:42 np0005539508 podman[204501]: 2025-11-29 06:37:42.797267007 +0000 UTC m=+0.930940210 container exec_died c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 01:37:42 np0005539508 python3.9[204639]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398261.5576751-2293-47691541546331/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:43.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:43 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:37:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:37:43 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:37:43 np0005539508 python3.9[204813]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:37:43 np0005539508 podman[205026]: 2025-11-29 06:37:43.830646036 +0000 UTC m=+0.050885301 container exec f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 01:37:43 np0005539508 podman[205026]: 2025-11-29 06:37:43.845193959 +0000 UTC m=+0.065433194 container exec_died f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 01:37:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:43.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:44 np0005539508 python3.9[205075]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398262.9884467-2293-215288100640885/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:44 np0005539508 podman[205118]: 2025-11-29 06:37:44.083839688 +0000 UTC m=+0.065549197 container exec c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., version=2.2.4, io.buildah.version=1.28.2, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 29 01:37:44 np0005539508 podman[205118]: 2025-11-29 06:37:44.129333441 +0000 UTC m=+0.111042940 container exec_died c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, distribution-scope=public, version=2.2.4, vcs-type=git, description=keepalived for Ceph, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20)
Nov 29 01:37:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:37:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:44 np0005539508 python3.9[205303]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:37:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:45.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:45 np0005539508 python3.9[205427]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398264.1918023-2293-14866083655403/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:45 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:37:45 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:37:45 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:37:45 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:37:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:37:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:45.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:37:45 np0005539508 python3.9[205579]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:37:46 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:37:46 np0005539508 python3.9[205751]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398265.4061613-2293-173779313235720/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:37:46 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:37:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:37:46 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:37:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:37:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:47.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:47 np0005539508 python3.9[205988]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:37:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:37:47 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:37:47 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:37:47 np0005539508 python3.9[206111]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398266.7080464-2293-32035291134779/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:47.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:37:47 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 46568f9b-a8d3-4397-baa9-2f892fa0855f does not exist
Nov 29 01:37:47 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 827cea6d-ee1d-44fa-b0ea-6a8e28c37f99 does not exist
Nov 29 01:37:47 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev e6172c4b-9c50-4e47-a013-b13fd5e628d0 does not exist
Nov 29 01:37:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:37:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:37:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:37:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:37:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:37:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:37:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:48 np0005539508 podman[206404]: 2025-11-29 06:37:48.608153674 +0000 UTC m=+0.064839657 container create 79a966d83972aa7dfeb7a0558a4f071d8f56e202fdfd9fe9e0a71f0ee608f183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 01:37:48 np0005539508 python3.9[206376]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:37:48 np0005539508 systemd[1]: Started libpod-conmon-79a966d83972aa7dfeb7a0558a4f071d8f56e202fdfd9fe9e0a71f0ee608f183.scope.
Nov 29 01:37:48 np0005539508 podman[206404]: 2025-11-29 06:37:48.568567013 +0000 UTC m=+0.025253006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:37:48 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:37:48 np0005539508 podman[206404]: 2025-11-29 06:37:48.758985659 +0000 UTC m=+0.215671652 container init 79a966d83972aa7dfeb7a0558a4f071d8f56e202fdfd9fe9e0a71f0ee608f183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_antonelli, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:37:48 np0005539508 podman[206404]: 2025-11-29 06:37:48.767209918 +0000 UTC m=+0.223895871 container start 79a966d83972aa7dfeb7a0558a4f071d8f56e202fdfd9fe9e0a71f0ee608f183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_antonelli, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:37:48 np0005539508 competent_antonelli[206421]: 167 167
Nov 29 01:37:48 np0005539508 systemd[1]: libpod-79a966d83972aa7dfeb7a0558a4f071d8f56e202fdfd9fe9e0a71f0ee608f183.scope: Deactivated successfully.
Nov 29 01:37:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:37:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:37:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:37:48 np0005539508 podman[206404]: 2025-11-29 06:37:48.896633691 +0000 UTC m=+0.353319644 container attach 79a966d83972aa7dfeb7a0558a4f071d8f56e202fdfd9fe9e0a71f0ee608f183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_antonelli, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:37:48 np0005539508 podman[206404]: 2025-11-29 06:37:48.89795213 +0000 UTC m=+0.354638093 container died 79a966d83972aa7dfeb7a0558a4f071d8f56e202fdfd9fe9e0a71f0ee608f183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_antonelli, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:37:48 np0005539508 systemd[1]: var-lib-containers-storage-overlay-17289264d4a4e08b437cb91e5fe2283757089590f5f85d1ed7ab1f40e8695725-merged.mount: Deactivated successfully.
Nov 29 01:37:48 np0005539508 podman[206404]: 2025-11-29 06:37:48.942363991 +0000 UTC m=+0.399049934 container remove 79a966d83972aa7dfeb7a0558a4f071d8f56e202fdfd9fe9e0a71f0ee608f183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_antonelli, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:37:48 np0005539508 systemd[1]: libpod-conmon-79a966d83972aa7dfeb7a0558a4f071d8f56e202fdfd9fe9e0a71f0ee608f183.scope: Deactivated successfully.
Nov 29 01:37:49 np0005539508 podman[206618]: 2025-11-29 06:37:49.098722968 +0000 UTC m=+0.028527601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:37:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:49.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:49 np0005539508 python3.9[206612]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398268.0563-2293-93309710320684/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:49 np0005539508 podman[206618]: 2025-11-29 06:37:49.252289173 +0000 UTC m=+0.182093766 container create 4d180951dfb94ecd4bb345d831f9ec48a3bfc97870ebe3f2d11cd2914bab5b3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:37:49 np0005539508 systemd[1]: Started libpod-conmon-4d180951dfb94ecd4bb345d831f9ec48a3bfc97870ebe3f2d11cd2914bab5b3a.scope.
Nov 29 01:37:49 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:37:49 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/246605d7a2abbc2179c1828125592a246b513031e8181d20bf3035a2a6e1158e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:37:49 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/246605d7a2abbc2179c1828125592a246b513031e8181d20bf3035a2a6e1158e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:37:49 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/246605d7a2abbc2179c1828125592a246b513031e8181d20bf3035a2a6e1158e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:37:49 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/246605d7a2abbc2179c1828125592a246b513031e8181d20bf3035a2a6e1158e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:37:49 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/246605d7a2abbc2179c1828125592a246b513031e8181d20bf3035a2a6e1158e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:37:49 np0005539508 podman[206618]: 2025-11-29 06:37:49.416662022 +0000 UTC m=+0.346466625 container init 4d180951dfb94ecd4bb345d831f9ec48a3bfc97870ebe3f2d11cd2914bab5b3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 01:37:49 np0005539508 podman[206618]: 2025-11-29 06:37:49.429965499 +0000 UTC m=+0.359770132 container start 4d180951dfb94ecd4bb345d831f9ec48a3bfc97870ebe3f2d11cd2914bab5b3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 01:37:49 np0005539508 podman[206618]: 2025-11-29 06:37:49.434087999 +0000 UTC m=+0.363892632 container attach 4d180951dfb94ecd4bb345d831f9ec48a3bfc97870ebe3f2d11cd2914bab5b3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 01:37:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:49.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:49 np0005539508 python3.9[206791]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:37:50 np0005539508 quirky_ride[206659]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:37:50 np0005539508 quirky_ride[206659]: --> relative data size: 1.0
Nov 29 01:37:50 np0005539508 quirky_ride[206659]: --> All data devices are unavailable
Nov 29 01:37:50 np0005539508 systemd[1]: libpod-4d180951dfb94ecd4bb345d831f9ec48a3bfc97870ebe3f2d11cd2914bab5b3a.scope: Deactivated successfully.
Nov 29 01:37:50 np0005539508 podman[206618]: 2025-11-29 06:37:50.249676085 +0000 UTC m=+1.179480698 container died 4d180951dfb94ecd4bb345d831f9ec48a3bfc97870ebe3f2d11cd2914bab5b3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:37:50 np0005539508 systemd[1]: var-lib-containers-storage-overlay-246605d7a2abbc2179c1828125592a246b513031e8181d20bf3035a2a6e1158e-merged.mount: Deactivated successfully.
Nov 29 01:37:50 np0005539508 python3.9[206938]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398269.4008389-2293-100870386556352/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:50 np0005539508 podman[206618]: 2025-11-29 06:37:50.532644643 +0000 UTC m=+1.462449236 container remove 4d180951dfb94ecd4bb345d831f9ec48a3bfc97870ebe3f2d11cd2914bab5b3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:37:50 np0005539508 systemd[1]: libpod-conmon-4d180951dfb94ecd4bb345d831f9ec48a3bfc97870ebe3f2d11cd2914bab5b3a.scope: Deactivated successfully.
Nov 29 01:37:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:51 np0005539508 python3.9[207206]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:37:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:37:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:51.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:37:51 np0005539508 podman[207234]: 2025-11-29 06:37:51.186556957 +0000 UTC m=+0.059585534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:37:51 np0005539508 podman[207234]: 2025-11-29 06:37:51.321351617 +0000 UTC m=+0.194380184 container create 08b025f5a7a590f7a778a1c45555f9faaab34efcb725c58e60af7b4db044f34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nightingale, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:37:51 np0005539508 systemd[1]: Started libpod-conmon-08b025f5a7a590f7a778a1c45555f9faaab34efcb725c58e60af7b4db044f34f.scope.
Nov 29 01:37:51 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:37:51 np0005539508 podman[207234]: 2025-11-29 06:37:51.880700111 +0000 UTC m=+0.753728668 container init 08b025f5a7a590f7a778a1c45555f9faaab34efcb725c58e60af7b4db044f34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nightingale, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:37:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:51.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:51 np0005539508 podman[207234]: 2025-11-29 06:37:51.897218491 +0000 UTC m=+0.770247058 container start 08b025f5a7a590f7a778a1c45555f9faaab34efcb725c58e60af7b4db044f34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 29 01:37:51 np0005539508 python3.9[207367]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398270.6568413-2293-233160472702351/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:51 np0005539508 hungry_nightingale[207373]: 167 167
Nov 29 01:37:51 np0005539508 systemd[1]: libpod-08b025f5a7a590f7a778a1c45555f9faaab34efcb725c58e60af7b4db044f34f.scope: Deactivated successfully.
Nov 29 01:37:51 np0005539508 podman[207234]: 2025-11-29 06:37:51.996756626 +0000 UTC m=+0.869785213 container attach 08b025f5a7a590f7a778a1c45555f9faaab34efcb725c58e60af7b4db044f34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:37:51 np0005539508 podman[207234]: 2025-11-29 06:37:51.997383714 +0000 UTC m=+0.870412261 container died 08b025f5a7a590f7a778a1c45555f9faaab34efcb725c58e60af7b4db044f34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nightingale, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 01:37:52 np0005539508 systemd[1]: var-lib-containers-storage-overlay-404472b18cfc44c6f0060a7db91356ab91df3e380621e409c17dd1eaf4a601be-merged.mount: Deactivated successfully.
Nov 29 01:37:52 np0005539508 podman[207234]: 2025-11-29 06:37:52.215765224 +0000 UTC m=+1.088793791 container remove 08b025f5a7a590f7a778a1c45555f9faaab34efcb725c58e60af7b4db044f34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nightingale, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:37:52 np0005539508 systemd[1]: libpod-conmon-08b025f5a7a590f7a778a1c45555f9faaab34efcb725c58e60af7b4db044f34f.scope: Deactivated successfully.
Nov 29 01:37:52 np0005539508 podman[207520]: 2025-11-29 06:37:52.42991534 +0000 UTC m=+0.056918976 container create 58299f21fc74a88c0c3bd1a0a6ef5c549336e148dc37229e196ef5d46af63542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 01:37:52 np0005539508 systemd[1]: Started libpod-conmon-58299f21fc74a88c0c3bd1a0a6ef5c549336e148dc37229e196ef5d46af63542.scope.
Nov 29 01:37:52 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:37:52 np0005539508 podman[207520]: 2025-11-29 06:37:52.414933274 +0000 UTC m=+0.041936930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:37:52 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e42bfda15dd175c9eef91cef0cb4784884c16cce0f3b0d2e878b171c1cfaa0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:37:52 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e42bfda15dd175c9eef91cef0cb4784884c16cce0f3b0d2e878b171c1cfaa0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:37:52 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e42bfda15dd175c9eef91cef0cb4784884c16cce0f3b0d2e878b171c1cfaa0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:37:52 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e42bfda15dd175c9eef91cef0cb4784884c16cce0f3b0d2e878b171c1cfaa0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:37:52 np0005539508 podman[207520]: 2025-11-29 06:37:52.523937784 +0000 UTC m=+0.150941430 container init 58299f21fc74a88c0c3bd1a0a6ef5c549336e148dc37229e196ef5d46af63542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shirley, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:37:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:37:52 np0005539508 podman[207520]: 2025-11-29 06:37:52.531854824 +0000 UTC m=+0.158858460 container start 58299f21fc74a88c0c3bd1a0a6ef5c549336e148dc37229e196ef5d46af63542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shirley, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:37:52 np0005539508 podman[207520]: 2025-11-29 06:37:52.535888291 +0000 UTC m=+0.162891957 container attach 58299f21fc74a88c0c3bd1a0a6ef5c549336e148dc37229e196ef5d46af63542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shirley, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 01:37:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:52 np0005539508 python3.9[207561]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:37:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:53.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:53 np0005539508 python3.9[207692]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398272.0888188-2293-54065289805619/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]: {
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:    "1": [
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:        {
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:            "devices": [
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:                "/dev/loop3"
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:            ],
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:            "lv_name": "ceph_lv0",
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:            "lv_size": "7511998464",
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:            "name": "ceph_lv0",
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:            "tags": {
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:                "ceph.cluster_name": "ceph",
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:                "ceph.crush_device_class": "",
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:                "ceph.encrypted": "0",
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:                "ceph.osd_id": "1",
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:                "ceph.type": "block",
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:                "ceph.vdo": "0"
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:            },
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:            "type": "block",
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:            "vg_name": "ceph_vg0"
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:        }
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]:    ]
Nov 29 01:37:53 np0005539508 ecstatic_shirley[207564]: }
Nov 29 01:37:53 np0005539508 systemd[1]: libpod-58299f21fc74a88c0c3bd1a0a6ef5c549336e148dc37229e196ef5d46af63542.scope: Deactivated successfully.
Nov 29 01:37:53 np0005539508 podman[207697]: 2025-11-29 06:37:53.410207705 +0000 UTC m=+0.027280094 container died 58299f21fc74a88c0c3bd1a0a6ef5c549336e148dc37229e196ef5d46af63542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 01:37:53 np0005539508 systemd[1]: var-lib-containers-storage-overlay-76e42bfda15dd175c9eef91cef0cb4784884c16cce0f3b0d2e878b171c1cfaa0-merged.mount: Deactivated successfully.
Nov 29 01:37:53 np0005539508 podman[207697]: 2025-11-29 06:37:53.804660104 +0000 UTC m=+0.421732473 container remove 58299f21fc74a88c0c3bd1a0a6ef5c549336e148dc37229e196ef5d46af63542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shirley, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 29 01:37:53 np0005539508 systemd[1]: libpod-conmon-58299f21fc74a88c0c3bd1a0a6ef5c549336e148dc37229e196ef5d46af63542.scope: Deactivated successfully.
Nov 29 01:37:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:37:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:53.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:37:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:37:54
Nov 29 01:37:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:37:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:37:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'volumes', '.rgw.root', 'backups', 'default.rgw.control', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'images']
Nov 29 01:37:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:37:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:37:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:37:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:37:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:37:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:37:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:37:54 np0005539508 python3.9[207936]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:37:54 np0005539508 podman[208006]: 2025-11-29 06:37:54.430198314 +0000 UTC m=+0.028702806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:37:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:54 np0005539508 podman[208006]: 2025-11-29 06:37:54.626476451 +0000 UTC m=+0.224980923 container create 5268dec2797fbcd5222111041f9c90220974158fda30ed810dd2cb6d3e6d6855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 01:37:55 np0005539508 python3.9[208142]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398273.7118747-2293-42781858481385/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:55 np0005539508 systemd[1]: Started libpod-conmon-5268dec2797fbcd5222111041f9c90220974158fda30ed810dd2cb6d3e6d6855.scope.
Nov 29 01:37:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:55.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:55 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:37:55 np0005539508 podman[208006]: 2025-11-29 06:37:55.546535814 +0000 UTC m=+1.145040296 container init 5268dec2797fbcd5222111041f9c90220974158fda30ed810dd2cb6d3e6d6855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bouman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:37:55 np0005539508 podman[208006]: 2025-11-29 06:37:55.553333632 +0000 UTC m=+1.151838094 container start 5268dec2797fbcd5222111041f9c90220974158fda30ed810dd2cb6d3e6d6855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bouman, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 01:37:55 np0005539508 competent_bouman[208169]: 167 167
Nov 29 01:37:55 np0005539508 systemd[1]: libpod-5268dec2797fbcd5222111041f9c90220974158fda30ed810dd2cb6d3e6d6855.scope: Deactivated successfully.
Nov 29 01:37:55 np0005539508 podman[208006]: 2025-11-29 06:37:55.79193195 +0000 UTC m=+1.390436402 container attach 5268dec2797fbcd5222111041f9c90220974158fda30ed810dd2cb6d3e6d6855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bouman, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 01:37:55 np0005539508 podman[208006]: 2025-11-29 06:37:55.792327711 +0000 UTC m=+1.390832163 container died 5268dec2797fbcd5222111041f9c90220974158fda30ed810dd2cb6d3e6d6855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bouman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:37:55 np0005539508 python3.9[208301]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:37:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:55.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:56 np0005539508 python3.9[208436]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398275.2703915-2293-243300696722071/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:57.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:57 np0005539508 python3.9[208589]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:37:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:37:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 01:37:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:57.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 01:37:57 np0005539508 systemd[1]: var-lib-containers-storage-overlay-2e6b22f195a221270defbf517d7aaec0766c79bdee16f52e29357dd48131b6fa-merged.mount: Deactivated successfully.
Nov 29 01:37:58 np0005539508 python3.9[208715]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398276.7939715-2293-81764984725639/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:37:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:59.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:37:59 np0005539508 python3.9[208867]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:37:59 np0005539508 python3.9[208991]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398278.4193048-2293-109820329254071/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:37:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:37:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:37:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:59.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:00 np0005539508 podman[208006]: 2025-11-29 06:38:00.340641924 +0000 UTC m=+5.939146366 container remove 5268dec2797fbcd5222111041f9c90220974158fda30ed810dd2cb6d3e6d6855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 01:38:00 np0005539508 systemd[1]: libpod-conmon-5268dec2797fbcd5222111041f9c90220974158fda30ed810dd2cb6d3e6d6855.scope: Deactivated successfully.
Nov 29 01:38:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:00 np0005539508 python3.9[209145]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:38:00 np0005539508 podman[209151]: 2025-11-29 06:38:00.517742544 +0000 UTC m=+0.028396836 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:38:00 np0005539508 podman[209151]: 2025-11-29 06:38:00.734141857 +0000 UTC m=+0.244796089 container create c7bd979705f5c466b4201e325d6b23ad6a122c2d55503a73f74208ee5efb153a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:38:00 np0005539508 systemd[1]: Started libpod-conmon-c7bd979705f5c466b4201e325d6b23ad6a122c2d55503a73f74208ee5efb153a.scope.
Nov 29 01:38:00 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:38:00 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d93f676b82c340fd03f8fcbb455e2f4d585579625486f8e9fb4be90684d79f47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:38:00 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d93f676b82c340fd03f8fcbb455e2f4d585579625486f8e9fb4be90684d79f47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:38:00 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d93f676b82c340fd03f8fcbb455e2f4d585579625486f8e9fb4be90684d79f47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:38:00 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d93f676b82c340fd03f8fcbb455e2f4d585579625486f8e9fb4be90684d79f47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:38:00 np0005539508 podman[209151]: 2025-11-29 06:38:00.887936288 +0000 UTC m=+0.398590510 container init c7bd979705f5c466b4201e325d6b23ad6a122c2d55503a73f74208ee5efb153a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 01:38:00 np0005539508 podman[209151]: 2025-11-29 06:38:00.901319968 +0000 UTC m=+0.411974180 container start c7bd979705f5c466b4201e325d6b23ad6a122c2d55503a73f74208ee5efb153a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 01:38:00 np0005539508 podman[209151]: 2025-11-29 06:38:00.905030765 +0000 UTC m=+0.415685057 container attach c7bd979705f5c466b4201e325d6b23ad6a122c2d55503a73f74208ee5efb153a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 01:38:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:01.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:01 np0005539508 python3.9[209296]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398280.0263493-2293-122910881362166/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:01 np0005539508 tender_babbage[209219]: {
Nov 29 01:38:01 np0005539508 tender_babbage[209219]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:38:01 np0005539508 tender_babbage[209219]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:38:01 np0005539508 tender_babbage[209219]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:38:01 np0005539508 tender_babbage[209219]:        "osd_id": 1,
Nov 29 01:38:01 np0005539508 tender_babbage[209219]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:38:01 np0005539508 tender_babbage[209219]:        "type": "bluestore"
Nov 29 01:38:01 np0005539508 tender_babbage[209219]:    }
Nov 29 01:38:01 np0005539508 tender_babbage[209219]: }
Nov 29 01:38:01 np0005539508 systemd[1]: libpod-c7bd979705f5c466b4201e325d6b23ad6a122c2d55503a73f74208ee5efb153a.scope: Deactivated successfully.
Nov 29 01:38:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:38:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:01.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:38:01 np0005539508 podman[209436]: 2025-11-29 06:38:01.966633035 +0000 UTC m=+0.049965484 container died c7bd979705f5c466b4201e325d6b23ad6a122c2d55503a73f74208ee5efb153a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_babbage, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:38:02 np0005539508 python3.9[209475]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:38:02 np0005539508 systemd[1]: var-lib-containers-storage-overlay-d93f676b82c340fd03f8fcbb455e2f4d585579625486f8e9fb4be90684d79f47-merged.mount: Deactivated successfully.
Nov 29 01:38:02 np0005539508 podman[209436]: 2025-11-29 06:38:02.495623437 +0000 UTC m=+0.578955826 container remove c7bd979705f5c466b4201e325d6b23ad6a122c2d55503a73f74208ee5efb153a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 01:38:02 np0005539508 systemd[1]: libpod-conmon-c7bd979705f5c466b4201e325d6b23ad6a122c2d55503a73f74208ee5efb153a.scope: Deactivated successfully.
Nov 29 01:38:02 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:38:02 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:38:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:02 np0005539508 podman[209481]: 2025-11-29 06:38:02.590214697 +0000 UTC m=+0.247859478 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:38:02 np0005539508 podman[209482]: 2025-11-29 06:38:02.619241121 +0000 UTC m=+0.277531681 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 01:38:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:38:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:03.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:38:03 np0005539508 python3.9[209675]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 29 01:38:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:38:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:03.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:38:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:38:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:05.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:38:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:05.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:38:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:38:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:38:06 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 44377ac0-a2d7-4050-a55a-b2b0f3957b55 does not exist
Nov 29 01:38:06 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 55ae6183-02c1-49f6-92b1-b2971b18711e does not exist
Nov 29 01:38:06 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev a0886498-4329-440a-8592-04949dcdb8b2 does not exist
Nov 29 01:38:06 np0005539508 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 29 01:38:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:38:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:07.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:38:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:38:07 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:38:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:38:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:07.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:38:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:08 np0005539508 python3.9[209883]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:38:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:09.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:38:09 np0005539508 python3.9[210086]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:09.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:10 np0005539508 python3.9[210238]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:11 np0005539508 python3.9[210392]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:11.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:11 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:38:11 np0005539508 python3.9[210545]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 01:38:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:11.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 01:38:12 np0005539508 python3.9[210697]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:38:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:38:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:38:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:38:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:38:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:38:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:38:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:38:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:38:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:38:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:38:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:38:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:38:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:38:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:38:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:38:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:38:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:38:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:38:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:38:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:38:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:38:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:38:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:38:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:13.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:13 np0005539508 python3.9[210850]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:38:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:13.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:38:13 np0005539508 python3.9[211002]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:14 np0005539508 python3.9[211154]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:38:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:15.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:38:15 np0005539508 python3.9[211307]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:15.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:16 np0005539508 python3.9[211461]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 01:38:16 np0005539508 systemd[1]: Reloading.
Nov 29 01:38:16 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:38:16 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:38:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:16 np0005539508 systemd[1]: Starting libvirt logging daemon socket...
Nov 29 01:38:16 np0005539508 systemd[1]: Listening on libvirt logging daemon socket.
Nov 29 01:38:16 np0005539508 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 29 01:38:16 np0005539508 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 29 01:38:16 np0005539508 systemd[1]: Starting libvirt logging daemon...
Nov 29 01:38:16 np0005539508 systemd[1]: Started libvirt logging daemon.
Nov 29 01:38:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:38:17.224 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:38:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:38:17.226 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:38:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:38:17.226 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:38:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:17.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:38:17 np0005539508 python3.9[211655]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 01:38:17 np0005539508 systemd[1]: Reloading.
Nov 29 01:38:17 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:38:17 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:38:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:17.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:17 np0005539508 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 29 01:38:17 np0005539508 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 29 01:38:17 np0005539508 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 29 01:38:17 np0005539508 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 29 01:38:17 np0005539508 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 29 01:38:17 np0005539508 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 29 01:38:17 np0005539508 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 01:38:17 np0005539508 systemd[1]: Started libvirt nodedev daemon.
Nov 29 01:38:18 np0005539508 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 29 01:38:18 np0005539508 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 29 01:38:18 np0005539508 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 29 01:38:18 np0005539508 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 29 01:38:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:19.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:19 np0005539508 python3.9[211882]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 01:38:19 np0005539508 systemd[1]: Reloading.
Nov 29 01:38:19 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:38:19 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:38:19 np0005539508 setroubleshoot[211718]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l f4bda634-c859-4153-984e-4815756e6df6
Nov 29 01:38:19 np0005539508 setroubleshoot[211718]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 29 01:38:19 np0005539508 setroubleshoot[211718]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l f4bda634-c859-4153-984e-4815756e6df6
Nov 29 01:38:19 np0005539508 setroubleshoot[211718]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 29 01:38:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:19.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:20 np0005539508 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 29 01:38:20 np0005539508 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 29 01:38:20 np0005539508 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 29 01:38:20 np0005539508 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 29 01:38:20 np0005539508 systemd[1]: Starting libvirt proxy daemon...
Nov 29 01:38:20 np0005539508 systemd[1]: Started libvirt proxy daemon.
Nov 29 01:38:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:21.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:21 np0005539508 python3.9[212097]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 01:38:21 np0005539508 systemd[1]: Reloading.
Nov 29 01:38:21 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:38:21 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:38:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:21.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:22 np0005539508 systemd[1]: Listening on libvirt locking daemon socket.
Nov 29 01:38:22 np0005539508 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 29 01:38:22 np0005539508 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 29 01:38:22 np0005539508 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 29 01:38:22 np0005539508 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 29 01:38:22 np0005539508 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 29 01:38:22 np0005539508 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 29 01:38:22 np0005539508 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 29 01:38:22 np0005539508 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 29 01:38:22 np0005539508 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 29 01:38:22 np0005539508 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 01:38:22 np0005539508 systemd[1]: Started libvirt QEMU daemon.
Nov 29 01:38:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:38:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:23 np0005539508 python3.9[212312]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 01:38:23 np0005539508 systemd[1]: Reloading.
Nov 29 01:38:23 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:38:23 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:38:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:23.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:23 np0005539508 systemd[1]: Starting libvirt secret daemon socket...
Nov 29 01:38:23 np0005539508 systemd[1]: Listening on libvirt secret daemon socket.
Nov 29 01:38:23 np0005539508 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 29 01:38:23 np0005539508 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 29 01:38:23 np0005539508 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 29 01:38:23 np0005539508 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 29 01:38:23 np0005539508 systemd[1]: Starting libvirt secret daemon...
Nov 29 01:38:23 np0005539508 systemd[1]: Started libvirt secret daemon.
Nov 29 01:38:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:23.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:38:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:38:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:38:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:38:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:38:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:38:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:24 np0005539508 python3.9[212525]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:25.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:25 np0005539508 python3.9[212678]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 01:38:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.003000087s ======
Nov 29 01:38:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:25.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000087s
Nov 29 01:38:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:26 np0005539508 python3.9[212830]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:38:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:27.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:27 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:38:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:27.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:28 np0005539508 python3.9[212987]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 01:38:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:38:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:29.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:38:29 np0005539508 python3.9[213161]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:38:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:38:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:38:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:38:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:38:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:38:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:38:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:38:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:38:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:38:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:38:29 np0005539508 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 29 01:38:29 np0005539508 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 29 01:38:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:38:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:29.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:38:30 np0005539508 python3.9[213310]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764398308.8015358-3367-245766001101515/.source.xml follow=False _original_basename=secret.xml.j2 checksum=63744b3abb892aaab98ed7226f328ffc66ff66bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:30 np0005539508 python3.9[213462]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 336ec58c-893b-528f-a0c1-6ed1196bc047#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:38:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:31.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:31 np0005539508 python3.9[213625]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:31.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:38:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:33 np0005539508 podman[213903]: 2025-11-29 06:38:33.093142885 +0000 UTC m=+0.096484366 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 29 01:38:33 np0005539508 podman[213904]: 2025-11-29 06:38:33.129096801 +0000 UTC m=+0.120442353 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 01:38:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:33.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:33.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:34 np0005539508 python3.9[214133]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:34 np0005539508 python3.9[214285]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:38:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:35.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:35 np0005539508 python3.9[214409]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764398314.4051373-3532-62904564504235/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:35 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 01:38:35 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.8 total, 600.0 interval#012Cumulative writes: 8512 writes, 34K keys, 8512 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 8512 writes, 1746 syncs, 4.88 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 628 writes, 988 keys, 628 commit groups, 1.0 writes per commit group, ingest: 0.32 MB, 0.00 MB/s#012Interval WAL: 628 writes, 295 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.8 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.8 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.8 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Nov 29 01:38:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 01:38:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:35.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 01:38:36 np0005539508 python3.9[214561]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:37.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:37 np0005539508 python3.9[214714]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:38:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:38:37 np0005539508 python3.9[214792]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:37.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:38 np0005539508 python3.9[214946]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:38:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:38 np0005539508 python3.9[215024]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.x1jkd2id recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:39.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:39 np0005539508 python3.9[215177]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:38:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:39.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:40 np0005539508 python3.9[215255]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:38:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:41.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:38:41 np0005539508 python3.9[215408]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:38:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 01:38:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:41.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 01:38:42 np0005539508 python3[215563]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 01:38:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:43.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:43 np0005539508 python3.9[215716]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:38:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:38:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:43.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:44 np0005539508 python3.9[215796]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:44 np0005539508 python3.9[215952]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:38:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:45.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:45 np0005539508 python3.9[216031]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:38:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:45.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:38:46 np0005539508 python3.9[216183]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:38:46 np0005539508 python3.9[216261]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:38:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:47.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:38:47 np0005539508 python3.9[216414]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:38:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:47.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:48 np0005539508 python3.9[216492]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:48 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:38:49 np0005539508 python3.9[216644]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:38:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:38:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:49.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:38:49 np0005539508 python3.9[216820]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398328.3904643-3907-249409543387922/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:38:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:49.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:38:50 np0005539508 python3.9[216972]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:51.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:51 np0005539508 python3.9[217127]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:38:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:51.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:52 np0005539508 python3.9[217282]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:53 np0005539508 ceph-mgr[74948]: [devicehealth INFO root] Check health
Nov 29 01:38:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:53.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:53 np0005539508 python3.9[217435]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:38:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:38:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:38:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:53.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:38:54 np0005539508 python3.9[217588]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:38:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:38:54
Nov 29 01:38:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:38:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:38:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', '.rgw.root', 'images', '.mgr']
Nov 29 01:38:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:38:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:38:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:38:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:38:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:38:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:38:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:38:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:55 np0005539508 python3.9[217742]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:38:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:55.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:55 np0005539508 python3.9[217898]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:55.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:56 np0005539508 python3.9[218050]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:38:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:57.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:57 np0005539508 python3.9[218174]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764398336.1421137-4123-241530808499150/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:38:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:57.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:38:58 np0005539508 python3.9[218326]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:38:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:38:58 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:38:59 np0005539508 python3.9[218449]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764398337.8909786-4168-137832301099940/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:38:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:59.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:38:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:38:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:59.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:38:59 np0005539508 python3.9[218602]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:39:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:00 np0005539508 python3.9[218727]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764398339.3047419-4213-168083904604660/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:39:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:39:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:01.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:39:01 np0005539508 python3.9[218880]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:39:01 np0005539508 systemd[1]: Reloading.
Nov 29 01:39:01 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:39:01 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:39:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:39:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:01.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:39:02 np0005539508 systemd[1]: Reached target edpm_libvirt.target.
Nov 29 01:39:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:03 np0005539508 python3.9[219070]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 01:39:03 np0005539508 systemd[1]: Reloading.
Nov 29 01:39:03 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:39:03 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:39:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:03.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:03 np0005539508 systemd[1]: Reloading.
Nov 29 01:39:03 np0005539508 podman[219108]: 2025-11-29 06:39:03.453385426 +0000 UTC m=+0.061686585 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 01:39:03 np0005539508 podman[219109]: 2025-11-29 06:39:03.478501339 +0000 UTC m=+0.093179981 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 01:39:03 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:39:03 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:39:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:03.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:03 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:39:04 np0005539508 systemd[1]: session-49.scope: Deactivated successfully.
Nov 29 01:39:04 np0005539508 systemd[1]: session-49.scope: Consumed 3min 48.229s CPU time.
Nov 29 01:39:04 np0005539508 systemd-logind[797]: Session 49 logged out. Waiting for processes to exit.
Nov 29 01:39:04 np0005539508 systemd-logind[797]: Removed session 49.
Nov 29 01:39:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:39:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:05.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:39:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:05.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:39:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:07.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:39:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:39:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:39:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:07.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:39:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 01:39:08 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:39:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:39:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:39:09 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:39:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:39:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:09.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:39:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:39:09 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:39:09 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:39:09 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:39:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:39:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:09.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:39:09 np0005539508 systemd-logind[797]: New session 50 of user zuul.
Nov 29 01:39:10 np0005539508 systemd[1]: Started Session 50 of User zuul.
Nov 29 01:39:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:39:10 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:39:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:39:10 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:39:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:39:10 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:39:10 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 1c1e8f87-0fcd-4288-8e35-32cfc1289060 does not exist
Nov 29 01:39:10 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 8c7202fb-4aa0-4419-80ad-f33df4d20ca5 does not exist
Nov 29 01:39:10 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 09f86b8d-9541-4d4c-adb7-5a60fd836bee does not exist
Nov 29 01:39:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:39:10 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:39:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:39:10 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:39:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:39:10 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:39:11 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:39:11 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:39:11 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:39:11 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:39:11 np0005539508 python3.9[219684]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:39:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:11.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:11 np0005539508 podman[219817]: 2025-11-29 06:39:11.420189894 +0000 UTC m=+0.020507651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:39:11 np0005539508 podman[219817]: 2025-11-29 06:39:11.685217246 +0000 UTC m=+0.285535013 container create 865c97f4308024094ab2cabcce7f75fec22e0959d1c280d8795b998012bc4364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lamarr, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:39:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:11.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:12 np0005539508 systemd[1]: Started libpod-conmon-865c97f4308024094ab2cabcce7f75fec22e0959d1c280d8795b998012bc4364.scope.
Nov 29 01:39:12 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:39:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:12 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:39:12 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:39:12 np0005539508 python3.9[219985]: ansible-ansible.builtin.service_facts Invoked
Nov 29 01:39:12 np0005539508 network[220002]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 01:39:12 np0005539508 network[220003]: 'network-scripts' will be removed from distribution in near future.
Nov 29 01:39:12 np0005539508 network[220004]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 01:39:12 np0005539508 podman[219817]: 2025-11-29 06:39:12.936469233 +0000 UTC m=+1.536787000 container init 865c97f4308024094ab2cabcce7f75fec22e0959d1c280d8795b998012bc4364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lamarr, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:39:12 np0005539508 podman[219817]: 2025-11-29 06:39:12.948663934 +0000 UTC m=+1.548981701 container start 865c97f4308024094ab2cabcce7f75fec22e0959d1c280d8795b998012bc4364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:39:12 np0005539508 trusting_lamarr[219909]: 167 167
Nov 29 01:39:12 np0005539508 systemd[1]: libpod-865c97f4308024094ab2cabcce7f75fec22e0959d1c280d8795b998012bc4364.scope: Deactivated successfully.
Nov 29 01:39:12 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:39:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:39:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:39:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:39:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:39:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:39:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:39:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:39:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:39:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:39:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:39:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:39:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:39:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:39:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:39:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:39:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:39:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:39:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:39:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:39:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:39:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:39:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:39:13 np0005539508 podman[219817]: 2025-11-29 06:39:13.050304097 +0000 UTC m=+1.650621844 container attach 865c97f4308024094ab2cabcce7f75fec22e0959d1c280d8795b998012bc4364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 01:39:13 np0005539508 podman[219817]: 2025-11-29 06:39:13.051782749 +0000 UTC m=+1.652100516 container died 865c97f4308024094ab2cabcce7f75fec22e0959d1c280d8795b998012bc4364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lamarr, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:39:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:13.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:13 np0005539508 systemd[1]: var-lib-containers-storage-overlay-1d79babae9b4010d393d4bb81fb88263c3811ec6a393b7652914ec5625606f99-merged.mount: Deactivated successfully.
Nov 29 01:39:13 np0005539508 podman[219817]: 2025-11-29 06:39:13.945861493 +0000 UTC m=+2.546179230 container remove 865c97f4308024094ab2cabcce7f75fec22e0959d1c280d8795b998012bc4364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 01:39:13 np0005539508 systemd[1]: libpod-conmon-865c97f4308024094ab2cabcce7f75fec22e0959d1c280d8795b998012bc4364.scope: Deactivated successfully.
Nov 29 01:39:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:39:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:13.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:39:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:39:14 np0005539508 podman[220051]: 2025-11-29 06:39:14.165450109 +0000 UTC m=+0.053637064 container create 22b5ba8bcffd7a88f044efb63046fd4afa466cb6a5ee00c1242d2bd1cd0114f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:39:14 np0005539508 systemd[1]: Started libpod-conmon-22b5ba8bcffd7a88f044efb63046fd4afa466cb6a5ee00c1242d2bd1cd0114f1.scope.
Nov 29 01:39:14 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:39:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7869565c3628bfda9c5488ac64eeec0674598dc3fec0762638b11867cb53ab9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:39:14 np0005539508 podman[220051]: 2025-11-29 06:39:14.135478537 +0000 UTC m=+0.023665522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:39:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7869565c3628bfda9c5488ac64eeec0674598dc3fec0762638b11867cb53ab9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:39:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7869565c3628bfda9c5488ac64eeec0674598dc3fec0762638b11867cb53ab9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:39:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7869565c3628bfda9c5488ac64eeec0674598dc3fec0762638b11867cb53ab9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:39:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7869565c3628bfda9c5488ac64eeec0674598dc3fec0762638b11867cb53ab9b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:39:14 np0005539508 podman[220051]: 2025-11-29 06:39:14.580553087 +0000 UTC m=+0.468740102 container init 22b5ba8bcffd7a88f044efb63046fd4afa466cb6a5ee00c1242d2bd1cd0114f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_babbage, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 01:39:14 np0005539508 podman[220051]: 2025-11-29 06:39:14.5910732 +0000 UTC m=+0.479260165 container start 22b5ba8bcffd7a88f044efb63046fd4afa466cb6a5ee00c1242d2bd1cd0114f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_babbage, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 01:39:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:14 np0005539508 podman[220051]: 2025-11-29 06:39:14.74654215 +0000 UTC m=+0.634729115 container attach 22b5ba8bcffd7a88f044efb63046fd4afa466cb6a5ee00c1242d2bd1cd0114f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 01:39:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:15.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:15 np0005539508 optimistic_babbage[220071]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:39:15 np0005539508 optimistic_babbage[220071]: --> relative data size: 1.0
Nov 29 01:39:15 np0005539508 optimistic_babbage[220071]: --> All data devices are unavailable
Nov 29 01:39:15 np0005539508 systemd[1]: libpod-22b5ba8bcffd7a88f044efb63046fd4afa466cb6a5ee00c1242d2bd1cd0114f1.scope: Deactivated successfully.
Nov 29 01:39:15 np0005539508 podman[220051]: 2025-11-29 06:39:15.438162682 +0000 UTC m=+1.326349657 container died 22b5ba8bcffd7a88f044efb63046fd4afa466cb6a5ee00c1242d2bd1cd0114f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:39:15 np0005539508 systemd[1]: var-lib-containers-storage-overlay-7869565c3628bfda9c5488ac64eeec0674598dc3fec0762638b11867cb53ab9b-merged.mount: Deactivated successfully.
Nov 29 01:39:15 np0005539508 podman[220051]: 2025-11-29 06:39:15.512801128 +0000 UTC m=+1.400988093 container remove 22b5ba8bcffd7a88f044efb63046fd4afa466cb6a5ee00c1242d2bd1cd0114f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 01:39:15 np0005539508 systemd[1]: libpod-conmon-22b5ba8bcffd7a88f044efb63046fd4afa466cb6a5ee00c1242d2bd1cd0114f1.scope: Deactivated successfully.
Nov 29 01:39:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:15.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:16 np0005539508 podman[220337]: 2025-11-29 06:39:16.087439355 +0000 UTC m=+0.018939846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:39:16 np0005539508 podman[220337]: 2025-11-29 06:39:16.279498699 +0000 UTC m=+0.210999210 container create 71c4962f8a4bb37d6dc9cb843638ae378d123605461016c855bdaaa36d0c5a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pare, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 01:39:16 np0005539508 systemd[1]: Started libpod-conmon-71c4962f8a4bb37d6dc9cb843638ae378d123605461016c855bdaaa36d0c5a40.scope.
Nov 29 01:39:16 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:39:16 np0005539508 podman[220337]: 2025-11-29 06:39:16.374374827 +0000 UTC m=+0.305875408 container init 71c4962f8a4bb37d6dc9cb843638ae378d123605461016c855bdaaa36d0c5a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:39:16 np0005539508 podman[220337]: 2025-11-29 06:39:16.380625517 +0000 UTC m=+0.312126028 container start 71c4962f8a4bb37d6dc9cb843638ae378d123605461016c855bdaaa36d0c5a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Nov 29 01:39:16 np0005539508 podman[220337]: 2025-11-29 06:39:16.385467676 +0000 UTC m=+0.316968147 container attach 71c4962f8a4bb37d6dc9cb843638ae378d123605461016c855bdaaa36d0c5a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 01:39:16 np0005539508 reverent_pare[220371]: 167 167
Nov 29 01:39:16 np0005539508 podman[220337]: 2025-11-29 06:39:16.390489281 +0000 UTC m=+0.321989742 container died 71c4962f8a4bb37d6dc9cb843638ae378d123605461016c855bdaaa36d0c5a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pare, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 01:39:16 np0005539508 systemd[1]: libpod-71c4962f8a4bb37d6dc9cb843638ae378d123605461016c855bdaaa36d0c5a40.scope: Deactivated successfully.
Nov 29 01:39:16 np0005539508 systemd[1]: var-lib-containers-storage-overlay-5774b7570e26ea6369989f781e66a727a7c67e3603efa4ae59b63486b9ef6a4a-merged.mount: Deactivated successfully.
Nov 29 01:39:16 np0005539508 podman[220337]: 2025-11-29 06:39:16.429827282 +0000 UTC m=+0.361327773 container remove 71c4962f8a4bb37d6dc9cb843638ae378d123605461016c855bdaaa36d0c5a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pare, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:39:16 np0005539508 systemd[1]: libpod-conmon-71c4962f8a4bb37d6dc9cb843638ae378d123605461016c855bdaaa36d0c5a40.scope: Deactivated successfully.
Nov 29 01:39:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:16 np0005539508 podman[220394]: 2025-11-29 06:39:16.632439429 +0000 UTC m=+0.058865154 container create 62d81ce0d977f159029f326819d5de9789558f6635438063ac80519cbb76afa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_archimedes, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 01:39:16 np0005539508 systemd[1]: Started libpod-conmon-62d81ce0d977f159029f326819d5de9789558f6635438063ac80519cbb76afa9.scope.
Nov 29 01:39:16 np0005539508 podman[220394]: 2025-11-29 06:39:16.604479105 +0000 UTC m=+0.030904920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:39:16 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:39:16 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/556c37e1b0fc497e8d465d666bf7dce3d111b53d8e4d6ae91da8663671bca2ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:39:16 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/556c37e1b0fc497e8d465d666bf7dce3d111b53d8e4d6ae91da8663671bca2ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:39:16 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/556c37e1b0fc497e8d465d666bf7dce3d111b53d8e4d6ae91da8663671bca2ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:39:16 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/556c37e1b0fc497e8d465d666bf7dce3d111b53d8e4d6ae91da8663671bca2ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:39:16 np0005539508 podman[220394]: 2025-11-29 06:39:16.718174425 +0000 UTC m=+0.144600210 container init 62d81ce0d977f159029f326819d5de9789558f6635438063ac80519cbb76afa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_archimedes, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:39:16 np0005539508 podman[220394]: 2025-11-29 06:39:16.725925978 +0000 UTC m=+0.152351713 container start 62d81ce0d977f159029f326819d5de9789558f6635438063ac80519cbb76afa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 01:39:16 np0005539508 podman[220394]: 2025-11-29 06:39:16.729762669 +0000 UTC m=+0.156188424 container attach 62d81ce0d977f159029f326819d5de9789558f6635438063ac80519cbb76afa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 01:39:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:39:17.226 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:39:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:39:17.227 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:39:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:39:17.228 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:39:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:17.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]: {
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:    "1": [
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:        {
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:            "devices": [
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:                "/dev/loop3"
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:            ],
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:            "lv_name": "ceph_lv0",
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:            "lv_size": "7511998464",
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:            "name": "ceph_lv0",
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:            "tags": {
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:                "ceph.cluster_name": "ceph",
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:                "ceph.crush_device_class": "",
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:                "ceph.encrypted": "0",
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:                "ceph.osd_id": "1",
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:                "ceph.type": "block",
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:                "ceph.vdo": "0"
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:            },
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:            "type": "block",
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:            "vg_name": "ceph_vg0"
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:        }
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]:    ]
Nov 29 01:39:17 np0005539508 adoring_archimedes[220411]: }
Nov 29 01:39:17 np0005539508 systemd[1]: libpod-62d81ce0d977f159029f326819d5de9789558f6635438063ac80519cbb76afa9.scope: Deactivated successfully.
Nov 29 01:39:17 np0005539508 podman[220394]: 2025-11-29 06:39:17.629715622 +0000 UTC m=+1.056141357 container died 62d81ce0d977f159029f326819d5de9789558f6635438063ac80519cbb76afa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 29 01:39:17 np0005539508 systemd[1]: var-lib-containers-storage-overlay-556c37e1b0fc497e8d465d666bf7dce3d111b53d8e4d6ae91da8663671bca2ef-merged.mount: Deactivated successfully.
Nov 29 01:39:17 np0005539508 python3.9[220546]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 01:39:17 np0005539508 podman[220394]: 2025-11-29 06:39:17.926691133 +0000 UTC m=+1.353116868 container remove 62d81ce0d977f159029f326819d5de9789558f6635438063ac80519cbb76afa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_archimedes, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 01:39:17 np0005539508 systemd[1]: libpod-conmon-62d81ce0d977f159029f326819d5de9789558f6635438063ac80519cbb76afa9.scope: Deactivated successfully.
Nov 29 01:39:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:17.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:18 np0005539508 podman[220740]: 2025-11-29 06:39:18.523425304 +0000 UTC m=+0.047355653 container create b089c79ad739b931e100b7be60e7fae5dca27a7273ad54a62bd097117aedf49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:39:18 np0005539508 systemd[1]: Started libpod-conmon-b089c79ad739b931e100b7be60e7fae5dca27a7273ad54a62bd097117aedf49c.scope.
Nov 29 01:39:18 np0005539508 podman[220740]: 2025-11-29 06:39:18.500544106 +0000 UTC m=+0.024474535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:39:18 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:39:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:18 np0005539508 podman[220740]: 2025-11-29 06:39:18.797157117 +0000 UTC m=+0.321087506 container init b089c79ad739b931e100b7be60e7fae5dca27a7273ad54a62bd097117aedf49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 01:39:18 np0005539508 podman[220740]: 2025-11-29 06:39:18.805462526 +0000 UTC m=+0.329392875 container start b089c79ad739b931e100b7be60e7fae5dca27a7273ad54a62bd097117aedf49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hamilton, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:39:18 np0005539508 podman[220740]: 2025-11-29 06:39:18.808816212 +0000 UTC m=+0.332746611 container attach b089c79ad739b931e100b7be60e7fae5dca27a7273ad54a62bd097117aedf49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:39:18 np0005539508 crazy_hamilton[220802]: 167 167
Nov 29 01:39:18 np0005539508 systemd[1]: libpod-b089c79ad739b931e100b7be60e7fae5dca27a7273ad54a62bd097117aedf49c.scope: Deactivated successfully.
Nov 29 01:39:18 np0005539508 podman[220740]: 2025-11-29 06:39:18.81258952 +0000 UTC m=+0.336519879 container died b089c79ad739b931e100b7be60e7fae5dca27a7273ad54a62bd097117aedf49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:39:18 np0005539508 python3.9[220805]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:39:18 np0005539508 systemd[1]: var-lib-containers-storage-overlay-dfd6f50ceb39d6170bd154b5a30e84ff2b74571ed9e42e97fea133d71107f5a4-merged.mount: Deactivated successfully.
Nov 29 01:39:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:39:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:19.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:19 np0005539508 podman[220740]: 2025-11-29 06:39:19.468996329 +0000 UTC m=+0.992926678 container remove b089c79ad739b931e100b7be60e7fae5dca27a7273ad54a62bd097117aedf49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hamilton, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:39:19 np0005539508 systemd[1]: libpod-conmon-b089c79ad739b931e100b7be60e7fae5dca27a7273ad54a62bd097117aedf49c.scope: Deactivated successfully.
Nov 29 01:39:19 np0005539508 podman[220831]: 2025-11-29 06:39:19.682792598 +0000 UTC m=+0.042574776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:39:19 np0005539508 podman[220831]: 2025-11-29 06:39:19.776102331 +0000 UTC m=+0.135884489 container create 9d87dfe1009a531785f98149320da08916d7fe20ff618ffe9a8cf4d97fe40550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 01:39:19 np0005539508 systemd[1]: Started libpod-conmon-9d87dfe1009a531785f98149320da08916d7fe20ff618ffe9a8cf4d97fe40550.scope.
Nov 29 01:39:19 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:39:19 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e86730046b2d7e4d8a5d91f1762dbe75bcdc39bac45dacb1bb0d1ed20d8fd8c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:39:19 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e86730046b2d7e4d8a5d91f1762dbe75bcdc39bac45dacb1bb0d1ed20d8fd8c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:39:19 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e86730046b2d7e4d8a5d91f1762dbe75bcdc39bac45dacb1bb0d1ed20d8fd8c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:39:19 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e86730046b2d7e4d8a5d91f1762dbe75bcdc39bac45dacb1bb0d1ed20d8fd8c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:39:19 np0005539508 podman[220831]: 2025-11-29 06:39:19.875619014 +0000 UTC m=+0.235401222 container init 9d87dfe1009a531785f98149320da08916d7fe20ff618ffe9a8cf4d97fe40550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:39:19 np0005539508 podman[220831]: 2025-11-29 06:39:19.885372564 +0000 UTC m=+0.245154722 container start 9d87dfe1009a531785f98149320da08916d7fe20ff618ffe9a8cf4d97fe40550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_montalcini, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 01:39:19 np0005539508 podman[220831]: 2025-11-29 06:39:19.889443351 +0000 UTC m=+0.249225559 container attach 9d87dfe1009a531785f98149320da08916d7fe20ff618ffe9a8cf4d97fe40550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 01:39:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:19.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:20 np0005539508 practical_montalcini[220847]: {
Nov 29 01:39:20 np0005539508 practical_montalcini[220847]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:39:20 np0005539508 practical_montalcini[220847]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:39:20 np0005539508 practical_montalcini[220847]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:39:20 np0005539508 practical_montalcini[220847]:        "osd_id": 1,
Nov 29 01:39:20 np0005539508 practical_montalcini[220847]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:39:20 np0005539508 practical_montalcini[220847]:        "type": "bluestore"
Nov 29 01:39:20 np0005539508 practical_montalcini[220847]:    }
Nov 29 01:39:20 np0005539508 practical_montalcini[220847]: }
Nov 29 01:39:20 np0005539508 systemd[1]: libpod-9d87dfe1009a531785f98149320da08916d7fe20ff618ffe9a8cf4d97fe40550.scope: Deactivated successfully.
Nov 29 01:39:20 np0005539508 podman[220868]: 2025-11-29 06:39:20.905462192 +0000 UTC m=+0.112736123 container died 9d87dfe1009a531785f98149320da08916d7fe20ff618ffe9a8cf4d97fe40550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_montalcini, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 01:39:20 np0005539508 systemd[1]: var-lib-containers-storage-overlay-e86730046b2d7e4d8a5d91f1762dbe75bcdc39bac45dacb1bb0d1ed20d8fd8c2-merged.mount: Deactivated successfully.
Nov 29 01:39:20 np0005539508 podman[220868]: 2025-11-29 06:39:20.959247869 +0000 UTC m=+0.166521790 container remove 9d87dfe1009a531785f98149320da08916d7fe20ff618ffe9a8cf4d97fe40550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:39:20 np0005539508 systemd[1]: libpod-conmon-9d87dfe1009a531785f98149320da08916d7fe20ff618ffe9a8cf4d97fe40550.scope: Deactivated successfully.
Nov 29 01:39:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:39:21 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:39:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:39:21 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:39:21 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 62c182f2-b963-4012-9e6b-cef7e1de1a97 does not exist
Nov 29 01:39:21 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev b9f0f398-0675-4a5a-896a-d50d9dc046dc does not exist
Nov 29 01:39:21 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev a6eb3983-84e2-435e-aa31-6aa0ab4e034a does not exist
Nov 29 01:39:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:21.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:21.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:22 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:39:22 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:39:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:23.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:23.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:39:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:39:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:39:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:39:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:39:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:39:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:39:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:25 np0005539508 python3.9[221088]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:39:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:25.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:39:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:25.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:39:26 np0005539508 python3.9[221240]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:39:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:27 np0005539508 python3.9[221393]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:39:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:27.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:27 np0005539508 python3.9[221546]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:39:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:27.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:28 np0005539508 python3.9[221699]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:39:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.065545) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398369065657, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1352, "num_deletes": 250, "total_data_size": 2476152, "memory_usage": 2503320, "flush_reason": "Manual Compaction"}
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398369080089, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1461897, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13139, "largest_seqno": 14490, "table_properties": {"data_size": 1457009, "index_size": 2284, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12208, "raw_average_key_size": 20, "raw_value_size": 1446470, "raw_average_value_size": 2402, "num_data_blocks": 103, "num_entries": 602, "num_filter_entries": 602, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764398218, "oldest_key_time": 1764398218, "file_creation_time": 1764398369, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 14855 microseconds, and 7919 cpu microseconds.
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.080400) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1461897 bytes OK
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.080519) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.082785) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.082808) EVENT_LOG_v1 {"time_micros": 1764398369082800, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.082829) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2470278, prev total WAL file size 2470278, number of live WAL files 2.
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.084721) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323533' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1427KB)], [29(10MB)]
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398369084808, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 12071523, "oldest_snapshot_seqno": -1}
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4610 keys, 9170082 bytes, temperature: kUnknown
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398369140596, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 9170082, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9136987, "index_size": 20441, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 112467, "raw_average_key_size": 24, "raw_value_size": 9051467, "raw_average_value_size": 1963, "num_data_blocks": 883, "num_entries": 4610, "num_filter_entries": 4610, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 1764398369, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.140868) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 9170082 bytes
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.142247) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.0 rd, 164.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 10.1 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(14.5) write-amplify(6.3) OK, records in: 5063, records dropped: 453 output_compression: NoCompression
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.142263) EVENT_LOG_v1 {"time_micros": 1764398369142256, "job": 12, "event": "compaction_finished", "compaction_time_micros": 55897, "compaction_time_cpu_micros": 20121, "output_level": 6, "num_output_files": 1, "total_output_size": 9170082, "num_input_records": 5063, "num_output_records": 4610, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398369142559, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398369144089, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.084601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.144130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.144136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.144138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.144140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:39:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.144142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:39:29 np0005539508 python3.9[221823]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764398367.927275-250-113594569056087/.source.iscsi _original_basename=.97aetkex follow=False checksum=91783c1b2b0f473e0aa10089b38d8c6438a20bbb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:39:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:29.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:39:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:39:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:39:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:39:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:39:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:39:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:39:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:39:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:39:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:39:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:29.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:30 np0005539508 python3.9[222025]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:39:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:31 np0005539508 python3.9[222177]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:39:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:31.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:39:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:31.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:39:32 np0005539508 python3.9[222330]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:39:32 np0005539508 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 29 01:39:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:33.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:33 np0005539508 python3.9[222489]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:39:33 np0005539508 systemd[1]: Reloading.
Nov 29 01:39:33 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:39:33 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:39:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:34.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:39:34 np0005539508 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 01:39:34 np0005539508 systemd[1]: Starting Open-iSCSI...
Nov 29 01:39:34 np0005539508 kernel: Loading iSCSI transport class v2.0-870.
Nov 29 01:39:34 np0005539508 systemd[1]: Started Open-iSCSI.
Nov 29 01:39:34 np0005539508 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 29 01:39:34 np0005539508 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 29 01:39:34 np0005539508 podman[222527]: 2025-11-29 06:39:34.501923049 +0000 UTC m=+0.090142483 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 01:39:34 np0005539508 podman[222528]: 2025-11-29 06:39:34.526726412 +0000 UTC m=+0.114051041 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller)
Nov 29 01:39:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:35.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:35 np0005539508 python3.9[222729]: ansible-ansible.builtin.service_facts Invoked
Nov 29 01:39:35 np0005539508 network[222746]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 01:39:35 np0005539508 network[222747]: 'network-scripts' will be removed from distribution in near future.
Nov 29 01:39:35 np0005539508 network[222748]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 01:39:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:36.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:39:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:37.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:39:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:38.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:39:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:39.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:40.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:40 np0005539508 python3.9[223024]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 01:39:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:39:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:41.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:39:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:42.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:43.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:39:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:39:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:44.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:39:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:45.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:45 np0005539508 python3.9[223181]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 29 01:39:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:46.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:47 np0005539508 python3.9[223337]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:39:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:39:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:47.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:39:47 np0005539508 python3.9[223463]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764398386.432603-481-16335278378920/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:39:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:48.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:39:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:39:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:49.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:39:49 np0005539508 python3.9[223642]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:39:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:50.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:39:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:51.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:39:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:52.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:39:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:53.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:39:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:39:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:54.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:39:54
Nov 29 01:39:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:39:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:39:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'backups', '.rgw.root', 'default.rgw.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', '.mgr']
Nov 29 01:39:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:39:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:39:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:39:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:39:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:39:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:39:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:39:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:39:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:55.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:39:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:56.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:57.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:57 np0005539508 python3.9[223825]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 01:39:57 np0005539508 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 01:39:57 np0005539508 systemd[1]: Stopped Load Kernel Modules.
Nov 29 01:39:57 np0005539508 systemd[1]: Stopping Load Kernel Modules...
Nov 29 01:39:57 np0005539508 systemd[1]: Starting Load Kernel Modules...
Nov 29 01:39:57 np0005539508 systemd[1]: Finished Load Kernel Modules.
Nov 29 01:39:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:39:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:58.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:39:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:39:58 np0005539508 python3.9[223981]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:39:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:39:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:39:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:39:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:59.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:40:00 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 01:40:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:00.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:00 np0005539508 python3.9[224136]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:40:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:01 np0005539508 ceph-mon[74654]: overall HEALTH_OK
Nov 29 01:40:01 np0005539508 python3.9[224291]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:40:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:01.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:02.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:02 np0005539508 python3.9[224443]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:40:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:03.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:03 np0005539508 python3.9[224567]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764398401.477998-655-188314749192914/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:40:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:40:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:04.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:04 np0005539508 python3.9[224719]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:40:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:05 np0005539508 podman[224845]: 2025-11-29 06:40:05.029089116 +0000 UTC m=+0.060686466 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:40:05 np0005539508 podman[224846]: 2025-11-29 06:40:05.060022346 +0000 UTC m=+0.086196320 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 01:40:05 np0005539508 python3.9[224909]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:40:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:40:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:05.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:40:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:06.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:06 np0005539508 python3.9[225068]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:40:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:07.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:07 np0005539508 python3.9[225223]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:40:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:40:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:08.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:40:08 np0005539508 python3.9[225375]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:40:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:40:09 np0005539508 python3.9[225528]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:40:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:09.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:40:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:10.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:40:10 np0005539508 python3.9[225730]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:40:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:11 np0005539508 python3.9[225883]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:40:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:11.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:12 np0005539508 python3.9[226035]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:40:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:12.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:12 np0005539508 python3.9[226189]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:40:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:40:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:40:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:40:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:40:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:40:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:40:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:40:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:40:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:40:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:40:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:40:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:40:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:40:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:40:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:40:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:40:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:40:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:40:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:40:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:40:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:40:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:40:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:40:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:13.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:14 np0005539508 python3.9[226342]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:40:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:40:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:40:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:14.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:40:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:14 np0005539508 python3.9[226494]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:40:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:15.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:15 np0005539508 python3.9[226573]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:40:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:16.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:16 np0005539508 python3.9[226725]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:40:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:16 np0005539508 python3.9[226803]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:40:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:40:17.227 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:40:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:40:17.229 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:40:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:40:17.229 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:40:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:40:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:17.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:40:17 np0005539508 python3.9[226956]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:40:18 np0005539508 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 29 01:40:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:18.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:18 np0005539508 python3.9[227109]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:40:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:19 np0005539508 python3.9[227187]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:40:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:40:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:19.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:19 np0005539508 python3.9[227340]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:40:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:40:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:20.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:40:20 np0005539508 python3.9[227418]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:40:20 np0005539508 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 01:40:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:21 np0005539508 python3.9[227574]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:40:21 np0005539508 systemd[1]: Reloading.
Nov 29 01:40:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:40:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:21.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:40:21 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:40:21 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:40:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:40:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:40:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:40:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:22.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:40:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 01:40:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 01:40:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 01:40:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 01:40:22 np0005539508 python3.9[227882]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:40:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:40:23 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:40:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:40:23 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:40:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:40:23 np0005539508 python3.9[227975]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:40:23 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:40:23 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 97f8f56f-a14c-475e-8c5b-f7fa59061626 does not exist
Nov 29 01:40:23 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 917418f7-2d9f-4a8a-bb67-4def966e263e does not exist
Nov 29 01:40:23 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 1265e801-6934-4613-957d-2ac11acee9a5 does not exist
Nov 29 01:40:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:40:23 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:40:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:40:23 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:40:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:40:23 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:40:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:40:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:40:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 01:40:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 01:40:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:40:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:23.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:24.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:40:24 np0005539508 podman[228239]: 2025-11-29 06:40:24.03989563 +0000 UTC m=+0.024083413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:40:24 np0005539508 podman[228239]: 2025-11-29 06:40:24.187860096 +0000 UTC m=+0.172047859 container create 1b2de9e320ddf678c86f43501e1142222d507bc669cc473afab82a7fcc6ac3de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_pasteur, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 01:40:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:40:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:40:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:40:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:40:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:40:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:40:24 np0005539508 python3.9[228282]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:40:24 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 29 01:40:24 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:24.338442) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 01:40:24 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 29 01:40:24 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398424338478, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 676, "num_deletes": 252, "total_data_size": 932655, "memory_usage": 944960, "flush_reason": "Manual Compaction"}
Nov 29 01:40:24 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 29 01:40:24 np0005539508 systemd[1]: Started libpod-conmon-1b2de9e320ddf678c86f43501e1142222d507bc669cc473afab82a7fcc6ac3de.scope.
Nov 29 01:40:24 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:40:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398425005737, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 924399, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14491, "largest_seqno": 15166, "table_properties": {"data_size": 920862, "index_size": 1381, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 6914, "raw_average_key_size": 16, "raw_value_size": 913841, "raw_average_value_size": 2191, "num_data_blocks": 63, "num_entries": 417, "num_filter_entries": 417, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764398369, "oldest_key_time": 1764398369, "file_creation_time": 1764398424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 667387 microseconds, and 3184 cpu microseconds.
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 01:40:25 np0005539508 python3.9[228365]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.005821) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 924399 bytes OK
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.005847) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.188668) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.188739) EVENT_LOG_v1 {"time_micros": 1764398425188729, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.188759) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 929159, prev total WAL file size 933954, number of live WAL files 2.
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.189438) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323533' seq:0, type:0; will stop at (end)
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(902KB)], [32(8955KB)]
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398425189627, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 10094481, "oldest_snapshot_seqno": -1}
Nov 29 01:40:25 np0005539508 podman[228239]: 2025-11-29 06:40:25.41887977 +0000 UTC m=+1.403067603 container init 1b2de9e320ddf678c86f43501e1142222d507bc669cc473afab82a7fcc6ac3de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_pasteur, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:40:25 np0005539508 podman[228239]: 2025-11-29 06:40:25.42931719 +0000 UTC m=+1.413504933 container start 1b2de9e320ddf678c86f43501e1142222d507bc669cc473afab82a7fcc6ac3de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 01:40:25 np0005539508 priceless_pasteur[228310]: 167 167
Nov 29 01:40:25 np0005539508 systemd[1]: libpod-1b2de9e320ddf678c86f43501e1142222d507bc669cc473afab82a7fcc6ac3de.scope: Deactivated successfully.
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 4510 keys, 9525503 bytes, temperature: kUnknown
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398425454367, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 9525503, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9492641, "index_size": 20464, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 112118, "raw_average_key_size": 24, "raw_value_size": 9408376, "raw_average_value_size": 2086, "num_data_blocks": 864, "num_entries": 4510, "num_filter_entries": 4510, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 1764398425, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 01:40:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:25.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.454747) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 9525503 bytes
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.518733) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 38.1 rd, 36.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.7 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(21.2) write-amplify(10.3) OK, records in: 5027, records dropped: 517 output_compression: NoCompression
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.518785) EVENT_LOG_v1 {"time_micros": 1764398425518772, "job": 14, "event": "compaction_finished", "compaction_time_micros": 264907, "compaction_time_cpu_micros": 25766, "output_level": 6, "num_output_files": 1, "total_output_size": 9525503, "num_input_records": 5027, "num_output_records": 4510, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398425519063, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398425520412, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.189324) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.520484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.520490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.520492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.520494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:40:25 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.520496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:40:25 np0005539508 podman[228239]: 2025-11-29 06:40:25.55795746 +0000 UTC m=+1.542145243 container attach 1b2de9e320ddf678c86f43501e1142222d507bc669cc473afab82a7fcc6ac3de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_pasteur, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:40:25 np0005539508 podman[228239]: 2025-11-29 06:40:25.559124524 +0000 UTC m=+1.543312287 container died 1b2de9e320ddf678c86f43501e1142222d507bc669cc473afab82a7fcc6ac3de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_pasteur, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 01:40:26 np0005539508 python3.9[228529]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:40:26 np0005539508 systemd[1]: Reloading.
Nov 29 01:40:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:40:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:26.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:40:26 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:40:26 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:40:26 np0005539508 systemd[1]: Starting Create netns directory...
Nov 29 01:40:26 np0005539508 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 01:40:26 np0005539508 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 01:40:26 np0005539508 systemd[1]: Finished Create netns directory.
Nov 29 01:40:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:26 np0005539508 systemd[1]: var-lib-containers-storage-overlay-e61b0a30fde40895470d6e92d3a30c540c196a598f43ee9d609b105b1abaf0a7-merged.mount: Deactivated successfully.
Nov 29 01:40:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:27.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:27 np0005539508 podman[228239]: 2025-11-29 06:40:27.475694854 +0000 UTC m=+3.459882597 container remove 1b2de9e320ddf678c86f43501e1142222d507bc669cc473afab82a7fcc6ac3de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:40:27 np0005539508 systemd[1]: libpod-conmon-1b2de9e320ddf678c86f43501e1142222d507bc669cc473afab82a7fcc6ac3de.scope: Deactivated successfully.
Nov 29 01:40:27 np0005539508 python3.9[228725]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:40:27 np0005539508 podman[228731]: 2025-11-29 06:40:27.642006127 +0000 UTC m=+0.023479916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:40:27 np0005539508 podman[228731]: 2025-11-29 06:40:27.779337167 +0000 UTC m=+0.160810936 container create be74943aae6f2be94595527776014d6514f8cbf403d35dae8464b1d71d385f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_faraday, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:40:28 np0005539508 systemd[1]: Started libpod-conmon-be74943aae6f2be94595527776014d6514f8cbf403d35dae8464b1d71d385f3c.scope.
Nov 29 01:40:28 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:40:28 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4fb003868971a3cc043beb0e80173be7088ff93f618d3b6f37ca652356a748/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:40:28 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4fb003868971a3cc043beb0e80173be7088ff93f618d3b6f37ca652356a748/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:40:28 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4fb003868971a3cc043beb0e80173be7088ff93f618d3b6f37ca652356a748/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:40:28 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4fb003868971a3cc043beb0e80173be7088ff93f618d3b6f37ca652356a748/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:40:28 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4fb003868971a3cc043beb0e80173be7088ff93f618d3b6f37ca652356a748/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:40:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:28.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:28 np0005539508 podman[228731]: 2025-11-29 06:40:28.397190187 +0000 UTC m=+0.778663966 container init be74943aae6f2be94595527776014d6514f8cbf403d35dae8464b1d71d385f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:40:28 np0005539508 podman[228731]: 2025-11-29 06:40:28.411795927 +0000 UTC m=+0.793269696 container start be74943aae6f2be94595527776014d6514f8cbf403d35dae8464b1d71d385f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_faraday, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 01:40:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:28 np0005539508 python3.9[228903]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:40:29 np0005539508 podman[228731]: 2025-11-29 06:40:29.103695176 +0000 UTC m=+1.485168965 container attach be74943aae6f2be94595527776014d6514f8cbf403d35dae8464b1d71d385f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_faraday, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:40:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:40:29 np0005539508 recursing_faraday[228796]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:40:29 np0005539508 recursing_faraday[228796]: --> relative data size: 1.0
Nov 29 01:40:29 np0005539508 recursing_faraday[228796]: --> All data devices are unavailable
Nov 29 01:40:29 np0005539508 systemd[1]: libpod-be74943aae6f2be94595527776014d6514f8cbf403d35dae8464b1d71d385f3c.scope: Deactivated successfully.
Nov 29 01:40:29 np0005539508 podman[228731]: 2025-11-29 06:40:29.300834466 +0000 UTC m=+1.682308265 container died be74943aae6f2be94595527776014d6514f8cbf403d35dae8464b1d71d385f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:40:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:40:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:29.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:40:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:40:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:40:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:40:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:40:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:40:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:40:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:40:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:40:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:40:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:40:29 np0005539508 python3.9[229048]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764398428.0541244-1276-73338465434031/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:40:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:30.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:30 np0005539508 systemd[1]: var-lib-containers-storage-overlay-9b4fb003868971a3cc043beb0e80173be7088ff93f618d3b6f37ca652356a748-merged.mount: Deactivated successfully.
Nov 29 01:40:30 np0005539508 python3.9[229251]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:40:31 np0005539508 podman[228731]: 2025-11-29 06:40:31.153048035 +0000 UTC m=+3.534521834 container remove be74943aae6f2be94595527776014d6514f8cbf403d35dae8464b1d71d385f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_faraday, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:40:31 np0005539508 systemd[1]: libpod-conmon-be74943aae6f2be94595527776014d6514f8cbf403d35dae8464b1d71d385f3c.scope: Deactivated successfully.
Nov 29 01:40:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:31.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:31 np0005539508 python3.9[229506]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:40:31 np0005539508 podman[229571]: 2025-11-29 06:40:31.815654052 +0000 UTC m=+0.041489964 container create 7e094777e910dc423ff854828fd96fe989f17c3a4fabe9489a6914276afd2c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:40:31 np0005539508 systemd[1]: Started libpod-conmon-7e094777e910dc423ff854828fd96fe989f17c3a4fabe9489a6914276afd2c0a.scope.
Nov 29 01:40:31 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:40:31 np0005539508 podman[229571]: 2025-11-29 06:40:31.795503403 +0000 UTC m=+0.021339295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:40:31 np0005539508 podman[229571]: 2025-11-29 06:40:31.898452104 +0000 UTC m=+0.124288066 container init 7e094777e910dc423ff854828fd96fe989f17c3a4fabe9489a6914276afd2c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cannon, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 01:40:31 np0005539508 podman[229571]: 2025-11-29 06:40:31.90527668 +0000 UTC m=+0.131112592 container start 7e094777e910dc423ff854828fd96fe989f17c3a4fabe9489a6914276afd2c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cannon, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:40:31 np0005539508 nifty_cannon[229611]: 167 167
Nov 29 01:40:31 np0005539508 podman[229571]: 2025-11-29 06:40:31.911359355 +0000 UTC m=+0.137195267 container attach 7e094777e910dc423ff854828fd96fe989f17c3a4fabe9489a6914276afd2c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cannon, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:40:31 np0005539508 systemd[1]: libpod-7e094777e910dc423ff854828fd96fe989f17c3a4fabe9489a6914276afd2c0a.scope: Deactivated successfully.
Nov 29 01:40:31 np0005539508 conmon[229611]: conmon 7e094777e910dc423ff8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e094777e910dc423ff854828fd96fe989f17c3a4fabe9489a6914276afd2c0a.scope/container/memory.events
Nov 29 01:40:31 np0005539508 podman[229571]: 2025-11-29 06:40:31.913542638 +0000 UTC m=+0.139378550 container died 7e094777e910dc423ff854828fd96fe989f17c3a4fabe9489a6914276afd2c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cannon, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 01:40:31 np0005539508 systemd[1]: var-lib-containers-storage-overlay-521f917365407235f33703f6d5055f307459a7544f5d75fc14e95e04fc3fc8cc-merged.mount: Deactivated successfully.
Nov 29 01:40:31 np0005539508 podman[229571]: 2025-11-29 06:40:31.962020382 +0000 UTC m=+0.187856264 container remove 7e094777e910dc423ff854828fd96fe989f17c3a4fabe9489a6914276afd2c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cannon, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 01:40:31 np0005539508 systemd[1]: libpod-conmon-7e094777e910dc423ff854828fd96fe989f17c3a4fabe9489a6914276afd2c0a.scope: Deactivated successfully.
Nov 29 01:40:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:32.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:32 np0005539508 podman[229706]: 2025-11-29 06:40:32.158374739 +0000 UTC m=+0.044333226 container create e383cf39f3fcd59be8128407fd65c3efe698aa22d7f31c3f6f1811e963579ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 01:40:32 np0005539508 systemd[1]: Started libpod-conmon-e383cf39f3fcd59be8128407fd65c3efe698aa22d7f31c3f6f1811e963579ac6.scope.
Nov 29 01:40:32 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:40:32 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9aa4e6e5a3cfdb8a0a13535542de4fa53995028302654c2225761e7310751cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:40:32 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9aa4e6e5a3cfdb8a0a13535542de4fa53995028302654c2225761e7310751cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:40:32 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9aa4e6e5a3cfdb8a0a13535542de4fa53995028302654c2225761e7310751cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:40:32 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9aa4e6e5a3cfdb8a0a13535542de4fa53995028302654c2225761e7310751cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:40:32 np0005539508 podman[229706]: 2025-11-29 06:40:32.138875238 +0000 UTC m=+0.024833745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:40:32 np0005539508 podman[229706]: 2025-11-29 06:40:32.243774825 +0000 UTC m=+0.129733342 container init e383cf39f3fcd59be8128407fd65c3efe698aa22d7f31c3f6f1811e963579ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:40:32 np0005539508 podman[229706]: 2025-11-29 06:40:32.252202998 +0000 UTC m=+0.138161485 container start e383cf39f3fcd59be8128407fd65c3efe698aa22d7f31c3f6f1811e963579ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:40:32 np0005539508 podman[229706]: 2025-11-29 06:40:32.256575463 +0000 UTC m=+0.142533970 container attach e383cf39f3fcd59be8128407fd65c3efe698aa22d7f31c3f6f1811e963579ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wu, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:40:32 np0005539508 python3.9[229715]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764398431.1768239-1351-92268449728319/.source.json _original_basename=.zmezc377 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:40:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:33 np0005539508 amazing_wu[229725]: {
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:    "1": [
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:        {
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:            "devices": [
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:                "/dev/loop3"
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:            ],
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:            "lv_name": "ceph_lv0",
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:            "lv_size": "7511998464",
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:            "name": "ceph_lv0",
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:            "tags": {
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:                "ceph.cluster_name": "ceph",
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:                "ceph.crush_device_class": "",
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:                "ceph.encrypted": "0",
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:                "ceph.osd_id": "1",
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:                "ceph.type": "block",
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:                "ceph.vdo": "0"
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:            },
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:            "type": "block",
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:            "vg_name": "ceph_vg0"
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:        }
Nov 29 01:40:33 np0005539508 amazing_wu[229725]:    ]
Nov 29 01:40:33 np0005539508 amazing_wu[229725]: }
Nov 29 01:40:33 np0005539508 systemd[1]: libpod-e383cf39f3fcd59be8128407fd65c3efe698aa22d7f31c3f6f1811e963579ac6.scope: Deactivated successfully.
Nov 29 01:40:33 np0005539508 podman[229706]: 2025-11-29 06:40:33.044487044 +0000 UTC m=+0.930445571 container died e383cf39f3fcd59be8128407fd65c3efe698aa22d7f31c3f6f1811e963579ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wu, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:40:33 np0005539508 systemd[1]: var-lib-containers-storage-overlay-e9aa4e6e5a3cfdb8a0a13535542de4fa53995028302654c2225761e7310751cf-merged.mount: Deactivated successfully.
Nov 29 01:40:33 np0005539508 python3.9[229881]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:40:33 np0005539508 podman[229706]: 2025-11-29 06:40:33.12640524 +0000 UTC m=+1.012363747 container remove e383cf39f3fcd59be8128407fd65c3efe698aa22d7f31c3f6f1811e963579ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 01:40:33 np0005539508 systemd[1]: libpod-conmon-e383cf39f3fcd59be8128407fd65c3efe698aa22d7f31c3f6f1811e963579ac6.scope: Deactivated successfully.
Nov 29 01:40:33 np0005539508 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 29 01:40:33 np0005539508 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 29 01:40:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:40:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:33.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:40:33 np0005539508 podman[230194]: 2025-11-29 06:40:33.805593483 +0000 UTC m=+0.049097933 container create decf6a05f8fc224ab1790c06d5673247d5377f115431305d9c186afdc0e6353d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_stonebraker, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 01:40:33 np0005539508 systemd[1]: Started libpod-conmon-decf6a05f8fc224ab1790c06d5673247d5377f115431305d9c186afdc0e6353d.scope.
Nov 29 01:40:33 np0005539508 podman[230194]: 2025-11-29 06:40:33.78357439 +0000 UTC m=+0.027078830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:40:33 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:40:33 np0005539508 podman[230194]: 2025-11-29 06:40:33.89938801 +0000 UTC m=+0.142892440 container init decf6a05f8fc224ab1790c06d5673247d5377f115431305d9c186afdc0e6353d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_stonebraker, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 01:40:33 np0005539508 podman[230194]: 2025-11-29 06:40:33.908396669 +0000 UTC m=+0.151901119 container start decf6a05f8fc224ab1790c06d5673247d5377f115431305d9c186afdc0e6353d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_stonebraker, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:40:33 np0005539508 podman[230194]: 2025-11-29 06:40:33.912514278 +0000 UTC m=+0.156018688 container attach decf6a05f8fc224ab1790c06d5673247d5377f115431305d9c186afdc0e6353d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 01:40:33 np0005539508 mystifying_stonebraker[230212]: 167 167
Nov 29 01:40:33 np0005539508 systemd[1]: libpod-decf6a05f8fc224ab1790c06d5673247d5377f115431305d9c186afdc0e6353d.scope: Deactivated successfully.
Nov 29 01:40:33 np0005539508 podman[230194]: 2025-11-29 06:40:33.918814039 +0000 UTC m=+0.162318509 container died decf6a05f8fc224ab1790c06d5673247d5377f115431305d9c186afdc0e6353d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_stonebraker, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:40:33 np0005539508 systemd[1]: var-lib-containers-storage-overlay-20b004c71362bebcbb9ec09681dfe4f97429e9121f859388f7e265f9e734a1ad-merged.mount: Deactivated successfully.
Nov 29 01:40:33 np0005539508 podman[230194]: 2025-11-29 06:40:33.962737302 +0000 UTC m=+0.206241712 container remove decf6a05f8fc224ab1790c06d5673247d5377f115431305d9c186afdc0e6353d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_stonebraker, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 01:40:33 np0005539508 systemd[1]: libpod-conmon-decf6a05f8fc224ab1790c06d5673247d5377f115431305d9c186afdc0e6353d.scope: Deactivated successfully.
Nov 29 01:40:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:34.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:34 np0005539508 podman[230258]: 2025-11-29 06:40:34.136056657 +0000 UTC m=+0.049570277 container create fc783aa8e4c2780a8de5f41d47c892aa2ec961cc2438d45cf206477d23129906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bohr, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 01:40:34 np0005539508 systemd[1]: Started libpod-conmon-fc783aa8e4c2780a8de5f41d47c892aa2ec961cc2438d45cf206477d23129906.scope.
Nov 29 01:40:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:40:34 np0005539508 podman[230258]: 2025-11-29 06:40:34.117231376 +0000 UTC m=+0.030745016 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:40:34 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:40:34 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1fc54721348bdd42457435458eb2ae4f05aa656614261b270eb07364e027d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:40:34 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1fc54721348bdd42457435458eb2ae4f05aa656614261b270eb07364e027d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:40:34 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1fc54721348bdd42457435458eb2ae4f05aa656614261b270eb07364e027d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:40:34 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1fc54721348bdd42457435458eb2ae4f05aa656614261b270eb07364e027d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:40:34 np0005539508 podman[230258]: 2025-11-29 06:40:34.23351423 +0000 UTC m=+0.147027860 container init fc783aa8e4c2780a8de5f41d47c892aa2ec961cc2438d45cf206477d23129906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 01:40:34 np0005539508 podman[230258]: 2025-11-29 06:40:34.243851257 +0000 UTC m=+0.157364897 container start fc783aa8e4c2780a8de5f41d47c892aa2ec961cc2438d45cf206477d23129906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 01:40:34 np0005539508 podman[230258]: 2025-11-29 06:40:34.249689015 +0000 UTC m=+0.163202635 container attach fc783aa8e4c2780a8de5f41d47c892aa2ec961cc2438d45cf206477d23129906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bohr, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:40:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:35 np0005539508 xenodochial_bohr[230298]: {
Nov 29 01:40:35 np0005539508 xenodochial_bohr[230298]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:40:35 np0005539508 xenodochial_bohr[230298]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:40:35 np0005539508 xenodochial_bohr[230298]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:40:35 np0005539508 xenodochial_bohr[230298]:        "osd_id": 1,
Nov 29 01:40:35 np0005539508 xenodochial_bohr[230298]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:40:35 np0005539508 xenodochial_bohr[230298]:        "type": "bluestore"
Nov 29 01:40:35 np0005539508 xenodochial_bohr[230298]:    }
Nov 29 01:40:35 np0005539508 xenodochial_bohr[230298]: }
Nov 29 01:40:35 np0005539508 systemd[1]: libpod-fc783aa8e4c2780a8de5f41d47c892aa2ec961cc2438d45cf206477d23129906.scope: Deactivated successfully.
Nov 29 01:40:35 np0005539508 podman[230419]: 2025-11-29 06:40:35.168256334 +0000 UTC m=+0.022446837 container died fc783aa8e4c2780a8de5f41d47c892aa2ec961cc2438d45cf206477d23129906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bohr, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 01:40:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:40:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:35.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:40:35 np0005539508 systemd[1]: var-lib-containers-storage-overlay-cc1fc54721348bdd42457435458eb2ae4f05aa656614261b270eb07364e027d2-merged.mount: Deactivated successfully.
Nov 29 01:40:35 np0005539508 podman[230419]: 2025-11-29 06:40:35.829661786 +0000 UTC m=+0.683852279 container remove fc783aa8e4c2780a8de5f41d47c892aa2ec961cc2438d45cf206477d23129906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bohr, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 01:40:35 np0005539508 systemd[1]: libpod-conmon-fc783aa8e4c2780a8de5f41d47c892aa2ec961cc2438d45cf206477d23129906.scope: Deactivated successfully.
Nov 29 01:40:35 np0005539508 podman[230420]: 2025-11-29 06:40:35.872543279 +0000 UTC m=+0.700405725 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:40:35 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:40:35 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:40:35 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:40:35 np0005539508 podman[230429]: 2025-11-29 06:40:35.905732414 +0000 UTC m=+0.734154346 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 29 01:40:35 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:40:35 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 29b23a40-97e4-4f65-9381-0ca0007f91ec does not exist
Nov 29 01:40:35 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 6adbe004-1a6c-497c-b428-a391bc75fd3d does not exist
Nov 29 01:40:35 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 647ffd37-79e2-43fd-92cf-3361ba30d02f does not exist
Nov 29 01:40:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:40:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:36.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:40:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:37 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:40:37 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:40:37 np0005539508 python3.9[230660]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 29 01:40:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:40:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:37.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:40:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:38.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:38 np0005539508 python3.9[230812]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 01:40:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:40:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:40:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:39.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:40:39 np0005539508 python3.9[230965]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 01:40:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:40.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:40:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:41.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:40:42 np0005539508 python3[231145]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 01:40:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:40:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:42.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:40:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:43.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:44.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:40:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:44 np0005539508 podman[231158]: 2025-11-29 06:40:44.924162705 +0000 UTC m=+2.816803193 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 01:40:45 np0005539508 podman[231217]: 2025-11-29 06:40:45.064274474 +0000 UTC m=+0.027692567 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 01:40:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:45.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:45 np0005539508 podman[231217]: 2025-11-29 06:40:45.754505446 +0000 UTC m=+0.717923459 container create 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Nov 29 01:40:45 np0005539508 python3[231145]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 01:40:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:46.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:46 np0005539508 python3.9[231407]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:40:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:47.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:47 np0005539508 python3.9[231564]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:40:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:48.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:48 np0005539508 python3.9[231640]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:40:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:48 np0005539508 python3.9[231791]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764398448.2444339-1615-230093002591453/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:40:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:40:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:49.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:49 np0005539508 python3.9[231868]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 01:40:49 np0005539508 systemd[1]: Reloading.
Nov 29 01:40:49 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:40:49 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:40:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:40:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:50.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:40:50 np0005539508 python3.9[232002]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:40:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:50 np0005539508 systemd[1]: Reloading.
Nov 29 01:40:50 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:40:50 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:40:51 np0005539508 systemd[1]: Starting multipathd container...
Nov 29 01:40:51 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:40:51 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe5844f7a04a75d12d37d3387717b7a1ae468d4e6f0199bcf710cee4e3c640b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 01:40:51 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe5844f7a04a75d12d37d3387717b7a1ae468d4e6f0199bcf710cee4e3c640b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 01:40:51 np0005539508 systemd[1]: Started /usr/bin/podman healthcheck run 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8.
Nov 29 01:40:51 np0005539508 podman[232069]: 2025-11-29 06:40:51.504010264 +0000 UTC m=+0.371242857 container init 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd)
Nov 29 01:40:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:51.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:51 np0005539508 multipathd[232084]: + sudo -E kolla_set_configs
Nov 29 01:40:51 np0005539508 podman[232069]: 2025-11-29 06:40:51.536968382 +0000 UTC m=+0.404200885 container start 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 01:40:51 np0005539508 podman[232069]: multipathd
Nov 29 01:40:51 np0005539508 systemd[1]: Started multipathd container.
Nov 29 01:40:51 np0005539508 multipathd[232084]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 01:40:51 np0005539508 multipathd[232084]: INFO:__main__:Validating config file
Nov 29 01:40:51 np0005539508 multipathd[232084]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 01:40:51 np0005539508 multipathd[232084]: INFO:__main__:Writing out command to execute
Nov 29 01:40:51 np0005539508 multipathd[232084]: ++ cat /run_command
Nov 29 01:40:51 np0005539508 multipathd[232084]: + CMD='/usr/sbin/multipathd -d'
Nov 29 01:40:51 np0005539508 multipathd[232084]: + ARGS=
Nov 29 01:40:51 np0005539508 multipathd[232084]: + sudo kolla_copy_cacerts
Nov 29 01:40:51 np0005539508 podman[232091]: 2025-11-29 06:40:51.631148581 +0000 UTC m=+0.085547841 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 01:40:51 np0005539508 systemd[1]: 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8-316cda398e766f7e.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 01:40:51 np0005539508 systemd[1]: 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8-316cda398e766f7e.service: Failed with result 'exit-code'.
Nov 29 01:40:51 np0005539508 multipathd[232084]: + [[ ! -n '' ]]
Nov 29 01:40:51 np0005539508 multipathd[232084]: + . kolla_extend_start
Nov 29 01:40:51 np0005539508 multipathd[232084]: Running command: '/usr/sbin/multipathd -d'
Nov 29 01:40:51 np0005539508 multipathd[232084]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 01:40:51 np0005539508 multipathd[232084]: + umask 0022
Nov 29 01:40:51 np0005539508 multipathd[232084]: + exec /usr/sbin/multipathd -d
Nov 29 01:40:51 np0005539508 multipathd[232084]: 3901.902802 | --------start up--------
Nov 29 01:40:51 np0005539508 multipathd[232084]: 3901.902821 | read /etc/multipath.conf
Nov 29 01:40:51 np0005539508 multipathd[232084]: 3901.909830 | path checkers start up
Nov 29 01:40:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:52.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:53 np0005539508 python3.9[232272]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:40:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:40:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:53.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:40:53 np0005539508 python3.9[232427]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:40:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:54.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:40:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:40:54
Nov 29 01:40:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:40:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:40:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'vms', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', '.mgr', 'images', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Nov 29 01:40:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:40:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:40:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:40:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:40:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:40:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:40:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:40:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:54 np0005539508 python3.9[232592]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 01:40:54 np0005539508 systemd[1]: Stopping multipathd container...
Nov 29 01:40:55 np0005539508 multipathd[232084]: 3905.450144 | exit (signal)
Nov 29 01:40:55 np0005539508 multipathd[232084]: 3905.450295 | --------shut down-------
Nov 29 01:40:55 np0005539508 systemd[1]: libpod-843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8.scope: Deactivated successfully.
Nov 29 01:40:55 np0005539508 podman[232596]: 2025-11-29 06:40:55.243369879 +0000 UTC m=+0.298079494 container died 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Nov 29 01:40:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:55.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:55 np0005539508 systemd[1]: 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8-316cda398e766f7e.timer: Deactivated successfully.
Nov 29 01:40:55 np0005539508 systemd[1]: Stopped /usr/bin/podman healthcheck run 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8.
Nov 29 01:40:56 np0005539508 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8-userdata-shm.mount: Deactivated successfully.
Nov 29 01:40:56 np0005539508 systemd[1]: var-lib-containers-storage-overlay-1fe5844f7a04a75d12d37d3387717b7a1ae468d4e6f0199bcf710cee4e3c640b-merged.mount: Deactivated successfully.
Nov 29 01:40:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:40:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:56.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:40:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:57 np0005539508 podman[232596]: 2025-11-29 06:40:57.066196795 +0000 UTC m=+2.120906370 container cleanup 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 29 01:40:57 np0005539508 podman[232596]: multipathd
Nov 29 01:40:57 np0005539508 podman[232628]: multipathd
Nov 29 01:40:57 np0005539508 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 29 01:40:57 np0005539508 systemd[1]: Stopped multipathd container.
Nov 29 01:40:57 np0005539508 systemd[1]: Starting multipathd container...
Nov 29 01:40:57 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:40:57 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe5844f7a04a75d12d37d3387717b7a1ae468d4e6f0199bcf710cee4e3c640b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 01:40:57 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe5844f7a04a75d12d37d3387717b7a1ae468d4e6f0199bcf710cee4e3c640b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 01:40:57 np0005539508 systemd[1]: Started /usr/bin/podman healthcheck run 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8.
Nov 29 01:40:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:40:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:57.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:40:57 np0005539508 podman[232641]: 2025-11-29 06:40:57.662820574 +0000 UTC m=+0.500497926 container init 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 01:40:57 np0005539508 multipathd[232656]: + sudo -E kolla_set_configs
Nov 29 01:40:57 np0005539508 podman[232641]: 2025-11-29 06:40:57.696595045 +0000 UTC m=+0.534272337 container start 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 01:40:57 np0005539508 podman[232641]: multipathd
Nov 29 01:40:57 np0005539508 systemd[1]: Started multipathd container.
Nov 29 01:40:57 np0005539508 multipathd[232656]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 01:40:57 np0005539508 multipathd[232656]: INFO:__main__:Validating config file
Nov 29 01:40:57 np0005539508 multipathd[232656]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 01:40:57 np0005539508 multipathd[232656]: INFO:__main__:Writing out command to execute
Nov 29 01:40:57 np0005539508 multipathd[232656]: ++ cat /run_command
Nov 29 01:40:57 np0005539508 multipathd[232656]: + CMD='/usr/sbin/multipathd -d'
Nov 29 01:40:57 np0005539508 multipathd[232656]: + ARGS=
Nov 29 01:40:57 np0005539508 multipathd[232656]: + sudo kolla_copy_cacerts
Nov 29 01:40:57 np0005539508 multipathd[232656]: + [[ ! -n '' ]]
Nov 29 01:40:57 np0005539508 multipathd[232656]: + . kolla_extend_start
Nov 29 01:40:57 np0005539508 multipathd[232656]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 01:40:57 np0005539508 multipathd[232656]: Running command: '/usr/sbin/multipathd -d'
Nov 29 01:40:57 np0005539508 multipathd[232656]: + umask 0022
Nov 29 01:40:57 np0005539508 multipathd[232656]: + exec /usr/sbin/multipathd -d
Nov 29 01:40:57 np0005539508 podman[232663]: 2025-11-29 06:40:57.833794181 +0000 UTC m=+0.124355967 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 01:40:57 np0005539508 multipathd[232656]: 3908.078309 | --------start up--------
Nov 29 01:40:57 np0005539508 multipathd[232656]: 3908.078330 | read /etc/multipath.conf
Nov 29 01:40:57 np0005539508 systemd[1]: 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8-41be233bebe0a7b2.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 01:40:57 np0005539508 systemd[1]: 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8-41be233bebe0a7b2.service: Failed with result 'exit-code'.
Nov 29 01:40:57 np0005539508 multipathd[232656]: 3908.083962 | path checkers start up
Nov 29 01:40:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:58.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:40:58 np0005539508 python3.9[232849]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:40:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:40:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:40:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:40:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:59.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:40:59 np0005539508 python3.9[233004]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 01:41:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:00.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:41:00 np0005539508 python3.9[233156]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 29 01:41:00 np0005539508 kernel: Key type psk registered
Nov 29 01:41:01 np0005539508 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 01:41:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:41:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:01.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:41:01 np0005539508 python3.9[233319]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:41:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:41:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:02.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:41:02 np0005539508 python3.9[233442]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764398461.168393-1855-112759495637085/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:41:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 7 op/s
Nov 29 01:41:03 np0005539508 python3.9[233595]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:41:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:03.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:04.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:41:04 np0005539508 python3.9[233747]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 01:41:04 np0005539508 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 01:41:04 np0005539508 systemd[1]: Stopped Load Kernel Modules.
Nov 29 01:41:04 np0005539508 systemd[1]: Stopping Load Kernel Modules...
Nov 29 01:41:04 np0005539508 systemd[1]: Starting Load Kernel Modules...
Nov 29 01:41:04 np0005539508 systemd[1]: Finished Load Kernel Modules.
Nov 29 01:41:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 01:41:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:05.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:05 np0005539508 python3.9[233904]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 01:41:06 np0005539508 podman[233906]: 2025-11-29 06:41:06.150149289 +0000 UTC m=+0.097821405 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 29 01:41:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:41:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:06.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:41:06 np0005539508 podman[233907]: 2025-11-29 06:41:06.214248681 +0000 UTC m=+0.161930358 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 01:41:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 80 KiB/s rd, 0 B/s wr, 132 op/s
Nov 29 01:41:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:07.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:41:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:08.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:41:08 np0005539508 systemd[1]: Reloading.
Nov 29 01:41:08 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:41:08 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:41:08 np0005539508 systemd[1]: Reloading.
Nov 29 01:41:08 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:41:08 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:41:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Nov 29 01:41:09 np0005539508 systemd-logind[797]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 01:41:09 np0005539508 systemd-logind[797]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 01:41:09 np0005539508 lvm[234062]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 01:41:09 np0005539508 lvm[234062]: VG ceph_vg0 finished
Nov 29 01:41:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:41:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:09.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:09 np0005539508 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 01:41:09 np0005539508 systemd[1]: Starting man-db-cache-update.service...
Nov 29 01:41:09 np0005539508 systemd[1]: Reloading.
Nov 29 01:41:09 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:41:09 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:41:09 np0005539508 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 01:41:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:10.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Nov 29 01:41:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:41:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:11.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:41:11 np0005539508 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 01:41:11 np0005539508 systemd[1]: Finished man-db-cache-update.service.
Nov 29 01:41:11 np0005539508 systemd[1]: man-db-cache-update.service: Consumed 1.583s CPU time.
Nov 29 01:41:11 np0005539508 systemd[1]: run-r6ee5c51bd5404b4996ca3bf7ce05adef.service: Deactivated successfully.
Nov 29 01:41:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:12.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Nov 29 01:41:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:41:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:41:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:41:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:41:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:41:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:41:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:41:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:41:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:41:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:41:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:41:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:41:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:41:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:41:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:41:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:41:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:41:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:41:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:41:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:41:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:41:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:41:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:41:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:13.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:13 np0005539508 python3.9[235456]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 01:41:14 np0005539508 systemd[1]: Stopping Open-iSCSI...
Nov 29 01:41:14 np0005539508 iscsid[222530]: iscsid shutting down.
Nov 29 01:41:14 np0005539508 systemd[1]: iscsid.service: Deactivated successfully.
Nov 29 01:41:14 np0005539508 systemd[1]: Stopped Open-iSCSI.
Nov 29 01:41:14 np0005539508 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 01:41:14 np0005539508 systemd[1]: Starting Open-iSCSI...
Nov 29 01:41:14 np0005539508 systemd[1]: Started Open-iSCSI.
Nov 29 01:41:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:41:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:14.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:41:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:41:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 102 KiB/s rd, 0 B/s wr, 170 op/s
Nov 29 01:41:15 np0005539508 python3.9[235610]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:41:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:41:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:15.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:41:16 np0005539508 python3.9[235767]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:41:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:41:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:16.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:41:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 71 KiB/s rd, 0 B/s wr, 119 op/s
Nov 29 01:41:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:41:17.228 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:41:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:41:17.230 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:41:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:41:17.230 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:41:17 np0005539508 python3.9[235920]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 01:41:17 np0005539508 systemd[1]: Reloading.
Nov 29 01:41:17 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:41:17 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:41:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:17.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:18.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:18 np0005539508 python3.9[236105]: ansible-ansible.builtin.service_facts Invoked
Nov 29 01:41:18 np0005539508 network[236122]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 01:41:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Nov 29 01:41:18 np0005539508 network[236123]: 'network-scripts' will be removed from distribution in near future.
Nov 29 01:41:18 np0005539508 network[236124]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 01:41:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:41:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:19.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:20.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:41:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:21.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:41:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:22.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:41:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:41:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:23.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:23 np0005539508 python3.9[236404]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:41:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:41:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:24.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:41:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:41:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:41:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:41:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:41:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:41:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:41:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:41:24 np0005539508 python3.9[236557]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:41:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:41:25 np0005539508 python3.9[236711]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:41:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:25.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:26.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:26 np0005539508 python3.9[236864]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:41:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:41:27 np0005539508 python3.9[237019]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:41:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:41:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:27.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:41:28 np0005539508 python3.9[237172]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:41:28 np0005539508 podman[237173]: 2025-11-29 06:41:28.10877708 +0000 UTC m=+0.076815352 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 01:41:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:28.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:41:28 np0005539508 python3.9[237346]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:41:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:41:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:29.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:41:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:41:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:41:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:41:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:41:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:41:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:41:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:41:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:41:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:41:29 np0005539508 python3.9[237502]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:41:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:30.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:41:31 np0005539508 python3.9[237676]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:41:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:31.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:31 np0005539508 python3.9[237859]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:41:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:41:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:32.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:41:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:41:33 np0005539508 python3.9[238011]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:41:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:33.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:34 np0005539508 python3.9[238164]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:41:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:34.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:41:34 np0005539508 python3.9[238316]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:41:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:41:35 np0005539508 python3.9[238469]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:41:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:35.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:35 np0005539508 python3.9[238621]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:41:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:41:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:36.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:41:36 np0005539508 podman[238751]: 2025-11-29 06:41:36.445195937 +0000 UTC m=+0.058014940 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:41:36 np0005539508 podman[238763]: 2025-11-29 06:41:36.485122635 +0000 UTC m=+0.102539748 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 01:41:36 np0005539508 python3.9[238888]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:41:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:41:37 np0005539508 python3.9[239102]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:41:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:37.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:41:38 np0005539508 python3.9[239254]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:41:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:41:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:41:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:41:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:38.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:41:38 np0005539508 python3.9[239408]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:41:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 01:41:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 01:41:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:41:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:41:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:41:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:41:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:41:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:41:38 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 4e0edc45-8c0b-47fa-b728-02ba9928b432 does not exist
Nov 29 01:41:38 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 9aeb3286-8032-4305-aa06-384a680ed67c does not exist
Nov 29 01:41:38 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev d39aece6-ed99-4e0f-abb0-7fada5d9c3a6 does not exist
Nov 29 01:41:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:41:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:41:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:41:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:41:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:41:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:41:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:41:39 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:41:39 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:41:39 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 01:41:39 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:41:39 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:41:39 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:41:39 np0005539508 python3.9[239671]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:41:39 np0005539508 podman[239699]: 2025-11-29 06:41:39.573871723 +0000 UTC m=+0.049959303 container create 769f3eeea9b5d798df4670a98e90337e560f1c3e7b5ae48a9ef979937c9dbaae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 01:41:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:41:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:39.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:41:39 np0005539508 systemd[1]: Started libpod-conmon-769f3eeea9b5d798df4670a98e90337e560f1c3e7b5ae48a9ef979937c9dbaae.scope.
Nov 29 01:41:39 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:41:39 np0005539508 podman[239699]: 2025-11-29 06:41:39.552019996 +0000 UTC m=+0.028107586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:41:39 np0005539508 podman[239699]: 2025-11-29 06:41:39.65937537 +0000 UTC m=+0.135462980 container init 769f3eeea9b5d798df4670a98e90337e560f1c3e7b5ae48a9ef979937c9dbaae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:41:39 np0005539508 podman[239699]: 2025-11-29 06:41:39.668349723 +0000 UTC m=+0.144437303 container start 769f3eeea9b5d798df4670a98e90337e560f1c3e7b5ae48a9ef979937c9dbaae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Nov 29 01:41:39 np0005539508 podman[239699]: 2025-11-29 06:41:39.672033127 +0000 UTC m=+0.148120707 container attach 769f3eeea9b5d798df4670a98e90337e560f1c3e7b5ae48a9ef979937c9dbaae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 01:41:39 np0005539508 cool_mendeleev[239739]: 167 167
Nov 29 01:41:39 np0005539508 systemd[1]: libpod-769f3eeea9b5d798df4670a98e90337e560f1c3e7b5ae48a9ef979937c9dbaae.scope: Deactivated successfully.
Nov 29 01:41:39 np0005539508 podman[239699]: 2025-11-29 06:41:39.675994899 +0000 UTC m=+0.152082479 container died 769f3eeea9b5d798df4670a98e90337e560f1c3e7b5ae48a9ef979937c9dbaae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 01:41:39 np0005539508 systemd[1]: var-lib-containers-storage-overlay-4476dec3443a9e54e49e8fca746b2ace259f6a8fa23ebecbfed0c48376d42f8c-merged.mount: Deactivated successfully.
Nov 29 01:41:39 np0005539508 podman[239699]: 2025-11-29 06:41:39.718249653 +0000 UTC m=+0.194337233 container remove 769f3eeea9b5d798df4670a98e90337e560f1c3e7b5ae48a9ef979937c9dbaae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mendeleev, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 01:41:39 np0005539508 systemd[1]: libpod-conmon-769f3eeea9b5d798df4670a98e90337e560f1c3e7b5ae48a9ef979937c9dbaae.scope: Deactivated successfully.
Nov 29 01:41:39 np0005539508 podman[239839]: 2025-11-29 06:41:39.910400584 +0000 UTC m=+0.054289946 container create 4d4f0031d04e934f4f8af790e3b6c06e446dbc29ab9b1e79f50efdea68697f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_black, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 01:41:39 np0005539508 systemd[1]: Started libpod-conmon-4d4f0031d04e934f4f8af790e3b6c06e446dbc29ab9b1e79f50efdea68697f2c.scope.
Nov 29 01:41:39 np0005539508 podman[239839]: 2025-11-29 06:41:39.886635742 +0000 UTC m=+0.030525114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:41:39 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:41:39 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b301fe45acc6d28d7b2b354d17cd99724ab89e5466a3b3ed342e73e78aa6df6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:41:39 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b301fe45acc6d28d7b2b354d17cd99724ab89e5466a3b3ed342e73e78aa6df6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:41:39 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b301fe45acc6d28d7b2b354d17cd99724ab89e5466a3b3ed342e73e78aa6df6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:41:39 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b301fe45acc6d28d7b2b354d17cd99724ab89e5466a3b3ed342e73e78aa6df6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:41:39 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b301fe45acc6d28d7b2b354d17cd99724ab89e5466a3b3ed342e73e78aa6df6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:41:40 np0005539508 podman[239839]: 2025-11-29 06:41:40.005329066 +0000 UTC m=+0.149218438 container init 4d4f0031d04e934f4f8af790e3b6c06e446dbc29ab9b1e79f50efdea68697f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_black, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:41:40 np0005539508 podman[239839]: 2025-11-29 06:41:40.022295606 +0000 UTC m=+0.166184938 container start 4d4f0031d04e934f4f8af790e3b6c06e446dbc29ab9b1e79f50efdea68697f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_black, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:41:40 np0005539508 podman[239839]: 2025-11-29 06:41:40.026528215 +0000 UTC m=+0.170417537 container attach 4d4f0031d04e934f4f8af790e3b6c06e446dbc29ab9b1e79f50efdea68697f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 01:41:40 np0005539508 python3.9[239910]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:41:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:40.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:41:40 np0005539508 stupefied_black[239899]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:41:40 np0005539508 stupefied_black[239899]: --> relative data size: 1.0
Nov 29 01:41:40 np0005539508 stupefied_black[239899]: --> All data devices are unavailable
Nov 29 01:41:40 np0005539508 systemd[1]: libpod-4d4f0031d04e934f4f8af790e3b6c06e446dbc29ab9b1e79f50efdea68697f2c.scope: Deactivated successfully.
Nov 29 01:41:40 np0005539508 podman[239839]: 2025-11-29 06:41:40.931661865 +0000 UTC m=+1.075551227 container died 4d4f0031d04e934f4f8af790e3b6c06e446dbc29ab9b1e79f50efdea68697f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_black, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 01:41:41 np0005539508 systemd[1]: var-lib-containers-storage-overlay-7b301fe45acc6d28d7b2b354d17cd99724ab89e5466a3b3ed342e73e78aa6df6-merged.mount: Deactivated successfully.
Nov 29 01:41:41 np0005539508 podman[239839]: 2025-11-29 06:41:41.159996137 +0000 UTC m=+1.303885499 container remove 4d4f0031d04e934f4f8af790e3b6c06e446dbc29ab9b1e79f50efdea68697f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_black, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 01:41:41 np0005539508 systemd[1]: libpod-conmon-4d4f0031d04e934f4f8af790e3b6c06e446dbc29ab9b1e79f50efdea68697f2c.scope: Deactivated successfully.
Nov 29 01:41:41 np0005539508 python3.9[240073]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:41:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:41.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:41 np0005539508 python3.9[240354]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:41:42 np0005539508 podman[240383]: 2025-11-29 06:41:42.004135902 +0000 UTC m=+0.063454544 container create 9b77428928a70727672ee3f1a208e447167b423921dd3ca0edf4349eaeef3b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 01:41:42 np0005539508 systemd[1]: Started libpod-conmon-9b77428928a70727672ee3f1a208e447167b423921dd3ca0edf4349eaeef3b73.scope.
Nov 29 01:41:42 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:41:42 np0005539508 podman[240383]: 2025-11-29 06:41:41.9842512 +0000 UTC m=+0.043569872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:41:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:42.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:42 np0005539508 podman[240383]: 2025-11-29 06:41:42.270131919 +0000 UTC m=+0.329450591 container init 9b77428928a70727672ee3f1a208e447167b423921dd3ca0edf4349eaeef3b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brown, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 01:41:42 np0005539508 podman[240383]: 2025-11-29 06:41:42.278507076 +0000 UTC m=+0.337825708 container start 9b77428928a70727672ee3f1a208e447167b423921dd3ca0edf4349eaeef3b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brown, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:41:42 np0005539508 infallible_brown[240422]: 167 167
Nov 29 01:41:42 np0005539508 systemd[1]: libpod-9b77428928a70727672ee3f1a208e447167b423921dd3ca0edf4349eaeef3b73.scope: Deactivated successfully.
Nov 29 01:41:42 np0005539508 podman[240383]: 2025-11-29 06:41:42.2935151 +0000 UTC m=+0.352833822 container attach 9b77428928a70727672ee3f1a208e447167b423921dd3ca0edf4349eaeef3b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:41:42 np0005539508 podman[240383]: 2025-11-29 06:41:42.294242351 +0000 UTC m=+0.353561003 container died 9b77428928a70727672ee3f1a208e447167b423921dd3ca0edf4349eaeef3b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 01:41:42 np0005539508 systemd[1]: var-lib-containers-storage-overlay-81d6970558bb1a1bad9687bc56ac35226c665cdca10a633aa22a4ed217c10d43-merged.mount: Deactivated successfully.
Nov 29 01:41:42 np0005539508 podman[240383]: 2025-11-29 06:41:42.393575298 +0000 UTC m=+0.452893930 container remove 9b77428928a70727672ee3f1a208e447167b423921dd3ca0edf4349eaeef3b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 01:41:42 np0005539508 systemd[1]: libpod-conmon-9b77428928a70727672ee3f1a208e447167b423921dd3ca0edf4349eaeef3b73.scope: Deactivated successfully.
Nov 29 01:41:42 np0005539508 podman[240577]: 2025-11-29 06:41:42.587366874 +0000 UTC m=+0.026618593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:41:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:41:42 np0005539508 python3.9[240571]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:41:42 np0005539508 podman[240577]: 2025-11-29 06:41:42.703168427 +0000 UTC m=+0.142420126 container create fbc417902ca205cfd0802b7080cb9c03a4283ac6889c9209d3ea1a6494f23a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 01:41:42 np0005539508 systemd[1]: Started libpod-conmon-fbc417902ca205cfd0802b7080cb9c03a4283ac6889c9209d3ea1a6494f23a16.scope.
Nov 29 01:41:42 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:41:42 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08c72eb701116f367d89bd6422c20bcab8b7fbf02d6b2c661142a8658e8bbc6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:41:42 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08c72eb701116f367d89bd6422c20bcab8b7fbf02d6b2c661142a8658e8bbc6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:41:42 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08c72eb701116f367d89bd6422c20bcab8b7fbf02d6b2c661142a8658e8bbc6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:41:42 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08c72eb701116f367d89bd6422c20bcab8b7fbf02d6b2c661142a8658e8bbc6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:41:42 np0005539508 podman[240577]: 2025-11-29 06:41:42.913488691 +0000 UTC m=+0.352740410 container init fbc417902ca205cfd0802b7080cb9c03a4283ac6889c9209d3ea1a6494f23a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cerf, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 01:41:42 np0005539508 podman[240577]: 2025-11-29 06:41:42.921059575 +0000 UTC m=+0.360311284 container start fbc417902ca205cfd0802b7080cb9c03a4283ac6889c9209d3ea1a6494f23a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 01:41:43 np0005539508 podman[240577]: 2025-11-29 06:41:43.014086264 +0000 UTC m=+0.453337973 container attach fbc417902ca205cfd0802b7080cb9c03a4283ac6889c9209d3ea1a6494f23a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 01:41:43 np0005539508 python3.9[240751]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:41:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:43.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]: {
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:    "1": [
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:        {
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:            "devices": [
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:                "/dev/loop3"
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:            ],
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:            "lv_name": "ceph_lv0",
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:            "lv_size": "7511998464",
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:            "name": "ceph_lv0",
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:            "tags": {
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:                "ceph.cluster_name": "ceph",
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:                "ceph.crush_device_class": "",
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:                "ceph.encrypted": "0",
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:                "ceph.osd_id": "1",
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:                "ceph.type": "block",
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:                "ceph.vdo": "0"
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:            },
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:            "type": "block",
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:            "vg_name": "ceph_vg0"
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:        }
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]:    ]
Nov 29 01:41:43 np0005539508 flamboyant_cerf[240618]: }
Nov 29 01:41:43 np0005539508 systemd[1]: libpod-fbc417902ca205cfd0802b7080cb9c03a4283ac6889c9209d3ea1a6494f23a16.scope: Deactivated successfully.
Nov 29 01:41:43 np0005539508 podman[240577]: 2025-11-29 06:41:43.697375654 +0000 UTC m=+1.136627383 container died fbc417902ca205cfd0802b7080cb9c03a4283ac6889c9209d3ea1a6494f23a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:41:43 np0005539508 systemd[1]: var-lib-containers-storage-overlay-08c72eb701116f367d89bd6422c20bcab8b7fbf02d6b2c661142a8658e8bbc6a-merged.mount: Deactivated successfully.
Nov 29 01:41:43 np0005539508 podman[240577]: 2025-11-29 06:41:43.984524108 +0000 UTC m=+1.423775817 container remove fbc417902ca205cfd0802b7080cb9c03a4283ac6889c9209d3ea1a6494f23a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 01:41:44 np0005539508 systemd[1]: libpod-conmon-fbc417902ca205cfd0802b7080cb9c03a4283ac6889c9209d3ea1a6494f23a16.scope: Deactivated successfully.
Nov 29 01:41:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:44.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:41:44 np0005539508 python3.9[240992]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 01:41:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:41:44 np0005539508 podman[241084]: 2025-11-29 06:41:44.705622777 +0000 UTC m=+0.045985351 container create c67b8487f73012c48de75fc94c42063abd49bedaa43f33442e4b096f0b141063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_galois, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 01:41:44 np0005539508 systemd[1]: Started libpod-conmon-c67b8487f73012c48de75fc94c42063abd49bedaa43f33442e4b096f0b141063.scope.
Nov 29 01:41:44 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:41:44 np0005539508 podman[241084]: 2025-11-29 06:41:44.68660849 +0000 UTC m=+0.026971114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:41:44 np0005539508 podman[241084]: 2025-11-29 06:41:44.782377746 +0000 UTC m=+0.122740340 container init c67b8487f73012c48de75fc94c42063abd49bedaa43f33442e4b096f0b141063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 01:41:44 np0005539508 podman[241084]: 2025-11-29 06:41:44.789287581 +0000 UTC m=+0.129650155 container start c67b8487f73012c48de75fc94c42063abd49bedaa43f33442e4b096f0b141063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_galois, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 01:41:44 np0005539508 podman[241084]: 2025-11-29 06:41:44.792475001 +0000 UTC m=+0.132837605 container attach c67b8487f73012c48de75fc94c42063abd49bedaa43f33442e4b096f0b141063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_galois, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:41:44 np0005539508 dreamy_galois[241100]: 167 167
Nov 29 01:41:44 np0005539508 systemd[1]: libpod-c67b8487f73012c48de75fc94c42063abd49bedaa43f33442e4b096f0b141063.scope: Deactivated successfully.
Nov 29 01:41:44 np0005539508 podman[241084]: 2025-11-29 06:41:44.793346696 +0000 UTC m=+0.133709270 container died c67b8487f73012c48de75fc94c42063abd49bedaa43f33442e4b096f0b141063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 01:41:44 np0005539508 systemd[1]: var-lib-containers-storage-overlay-f283f6101a688c969a5065df13aeb537ef0960d50dd5566b3c59d2305a52806e-merged.mount: Deactivated successfully.
Nov 29 01:41:44 np0005539508 podman[241084]: 2025-11-29 06:41:44.826143893 +0000 UTC m=+0.166506467 container remove c67b8487f73012c48de75fc94c42063abd49bedaa43f33442e4b096f0b141063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_galois, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 01:41:44 np0005539508 systemd[1]: libpod-conmon-c67b8487f73012c48de75fc94c42063abd49bedaa43f33442e4b096f0b141063.scope: Deactivated successfully.
Nov 29 01:41:45 np0005539508 podman[241159]: 2025-11-29 06:41:45.001002653 +0000 UTC m=+0.053011009 container create b49f0d72632778e894e6b561b7c0808eca97bee94eb55aa1e36c363cdb1a48b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:41:45 np0005539508 podman[241159]: 2025-11-29 06:41:44.972002424 +0000 UTC m=+0.024010800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:41:45 np0005539508 systemd[1]: Started libpod-conmon-b49f0d72632778e894e6b561b7c0808eca97bee94eb55aa1e36c363cdb1a48b2.scope.
Nov 29 01:41:45 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:41:45 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8d0fca5322462b05c1a35e92475119852950b214b0051bd6eb548cdc9b1f25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:41:45 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8d0fca5322462b05c1a35e92475119852950b214b0051bd6eb548cdc9b1f25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:41:45 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8d0fca5322462b05c1a35e92475119852950b214b0051bd6eb548cdc9b1f25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:41:45 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8d0fca5322462b05c1a35e92475119852950b214b0051bd6eb548cdc9b1f25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:41:45 np0005539508 podman[241159]: 2025-11-29 06:41:45.248559449 +0000 UTC m=+0.300567855 container init b49f0d72632778e894e6b561b7c0808eca97bee94eb55aa1e36c363cdb1a48b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_borg, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 01:41:45 np0005539508 podman[241159]: 2025-11-29 06:41:45.264538501 +0000 UTC m=+0.316546867 container start b49f0d72632778e894e6b561b7c0808eca97bee94eb55aa1e36c363cdb1a48b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 01:41:45 np0005539508 podman[241159]: 2025-11-29 06:41:45.271449396 +0000 UTC m=+0.323457772 container attach b49f0d72632778e894e6b561b7c0808eca97bee94eb55aa1e36c363cdb1a48b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:41:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:45.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:45 np0005539508 python3.9[241272]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 01:41:45 np0005539508 systemd[1]: Reloading.
Nov 29 01:41:45 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:41:45 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:41:46 np0005539508 upbeat_borg[241239]: {
Nov 29 01:41:46 np0005539508 upbeat_borg[241239]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:41:46 np0005539508 upbeat_borg[241239]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:41:46 np0005539508 upbeat_borg[241239]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:41:46 np0005539508 upbeat_borg[241239]:        "osd_id": 1,
Nov 29 01:41:46 np0005539508 upbeat_borg[241239]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:41:46 np0005539508 upbeat_borg[241239]:        "type": "bluestore"
Nov 29 01:41:46 np0005539508 upbeat_borg[241239]:    }
Nov 29 01:41:46 np0005539508 upbeat_borg[241239]: }
Nov 29 01:41:46 np0005539508 systemd[1]: libpod-b49f0d72632778e894e6b561b7c0808eca97bee94eb55aa1e36c363cdb1a48b2.scope: Deactivated successfully.
Nov 29 01:41:46 np0005539508 podman[241159]: 2025-11-29 06:41:46.173758056 +0000 UTC m=+1.225766412 container died b49f0d72632778e894e6b561b7c0808eca97bee94eb55aa1e36c363cdb1a48b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 01:41:46 np0005539508 systemd[1]: var-lib-containers-storage-overlay-5a8d0fca5322462b05c1a35e92475119852950b214b0051bd6eb548cdc9b1f25-merged.mount: Deactivated successfully.
Nov 29 01:41:46 np0005539508 podman[241159]: 2025-11-29 06:41:46.238616799 +0000 UTC m=+1.290625135 container remove b49f0d72632778e894e6b561b7c0808eca97bee94eb55aa1e36c363cdb1a48b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 01:41:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:46.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:46 np0005539508 systemd[1]: libpod-conmon-b49f0d72632778e894e6b561b7c0808eca97bee94eb55aa1e36c363cdb1a48b2.scope: Deactivated successfully.
Nov 29 01:41:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:41:46 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:41:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:41:46 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:41:46 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 7a41ce88-8c4b-4ec2-8f70-b0d368995dce does not exist
Nov 29 01:41:46 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 8ccc1637-f7ad-4480-a750-a02bc58330e4 does not exist
Nov 29 01:41:46 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 28d2e9dd-fba5-441f-af64-a7328a4523da does not exist
Nov 29 01:41:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:41:46 np0005539508 python3.9[241512]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:41:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:47.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:47 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:41:47 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:41:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:48.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:48 np0005539508 python3.9[241693]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:41:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:41:49 np0005539508 python3.9[241847]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:41:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:41:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:49.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:49 np0005539508 python3.9[242004]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:41:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:50.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:50 np0005539508 python3.9[242157]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:41:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:41:51 np0005539508 python3.9[242361]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:41:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:51.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:51 np0005539508 python3.9[242514]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:41:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:52.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:52 np0005539508 python3.9[242667]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:41:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:41:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:41:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:53.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:41:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:54.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:41:54
Nov 29 01:41:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:41:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:41:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', '.rgw.root', 'images', 'vms']
Nov 29 01:41:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:41:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:41:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:41:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:41:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:41:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:41:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:41:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:41:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:41:54 np0005539508 python3.9[242821]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:41:55 np0005539508 python3.9[242974]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:41:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:55.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:56 np0005539508 python3.9[243126]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:41:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:56.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:41:56 np0005539508 python3.9[243278]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:41:57 np0005539508 python3.9[243431]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:41:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:57.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:41:58 np0005539508 python3.9[243583]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:41:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:41:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:58.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:41:58 np0005539508 podman[243584]: 2025-11-29 06:41:58.326642976 +0000 UTC m=+0.060471299 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 01:41:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:41:58 np0005539508 python3.9[243757]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:41:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:41:59 np0005539508 python3.9[243910]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:41:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:41:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:41:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:59.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:00 np0005539508 python3.9[244062]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:42:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:42:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:00.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:42:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:00 np0005539508 python3.9[244214]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:42:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:01.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:02.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:03.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:04.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:42:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:42:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:05.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:42:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:06.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:07 np0005539508 podman[244247]: 2025-11-29 06:42:07.080031866 +0000 UTC m=+0.049039497 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:42:07 np0005539508 podman[244248]: 2025-11-29 06:42:07.114675935 +0000 UTC m=+0.083425378 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 01:42:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:07.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:08.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:42:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:09.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:42:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:10.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:42:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:11.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:12.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:42:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:42:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:42:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:42:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:42:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:42:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:42:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:42:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:42:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:42:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:42:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:42:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:42:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:42:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:42:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:42:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:42:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:42:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:42:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:42:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:42:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:42:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:42:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:42:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:13.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:42:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:42:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:14.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:42:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:42:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:14 np0005539508 python3.9[244474]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 29 01:42:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:15.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:15 np0005539508 python3.9[244628]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 01:42:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:16.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:16 np0005539508 python3.9[244786]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 01:42:16 np0005539508 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 01:42:16 np0005539508 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 01:42:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:42:17.230 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:42:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:42:17.231 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:42:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:42:17.231 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:42:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:42:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:17.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:42:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:18.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:18 np0005539508 systemd-logind[797]: New session 51 of user zuul.
Nov 29 01:42:18 np0005539508 systemd[1]: Started Session 51 of User zuul.
Nov 29 01:42:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:18 np0005539508 systemd[1]: session-51.scope: Deactivated successfully.
Nov 29 01:42:18 np0005539508 systemd-logind[797]: Session 51 logged out. Waiting for processes to exit.
Nov 29 01:42:18 np0005539508 systemd-logind[797]: Removed session 51.
Nov 29 01:42:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:42:19 np0005539508 python3.9[244975]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:42:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:19.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:20 np0005539508 python3.9[245096]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764398539.0246801-3438-244496785808796/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:42:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:20.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:21 np0005539508 python3.9[245246]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:42:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:21.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:21 np0005539508 python3.9[245323]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:42:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:22.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:22 np0005539508 python3.9[245473]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:42:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:22 np0005539508 python3.9[245594]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764398541.866962-3438-54032531683931/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:42:23 np0005539508 python3.9[245745]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:42:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:23.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:24 np0005539508 python3.9[245866]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764398543.1266162-3438-138614804732523/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:42:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:24.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:42:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:42:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:42:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:42:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:42:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:42:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:42:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:24 np0005539508 python3.9[246018]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:42:25 np0005539508 python3.9[246140]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764398544.3998175-3438-107750716453536/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:42:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:25.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:26 np0005539508 python3.9[246290]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:42:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:26.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:26 np0005539508 python3.9[246411]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764398545.6622844-3438-69367847712193/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:42:27 np0005539508 python3.9[246566]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:42:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:27.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:28.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:28 np0005539508 python3.9[246720]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:42:28 np0005539508 podman[246721]: 2025-11-29 06:42:28.54971379 +0000 UTC m=+0.086954649 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 01:42:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:29 np0005539508 python3.9[246892]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:42:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:42:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:42:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:42:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:42:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:42:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:42:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:29.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:42:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:42:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:42:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:42:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:42:30 np0005539508 python3.9[247044]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:42:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:42:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:30.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:42:30 np0005539508 python3.9[247167]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764398549.4820871-3759-154584658180860/.source _original_basename=.5rxm8zm2 follow=False checksum=8dc8cde5f9871ff2228372ba7c6e010a4bfe6deb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 29 01:42:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:31 np0005539508 python3.9[247370]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:42:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:31.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:42:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:32.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:42:32 np0005539508 python3.9[247522]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:42:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:32 np0005539508 python3.9[247643]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764398551.8392327-3837-262324410159520/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:42:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:33.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:33 np0005539508 python3.9[247794]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 01:42:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:34.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:34 np0005539508 python3.9[247915]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764398553.2478092-3882-210242173309905/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 01:42:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:42:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:35 np0005539508 python3.9[248068]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 29 01:42:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:35.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:36.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:36 np0005539508 python3.9[248220]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 01:42:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:37 np0005539508 podman[248345]: 2025-11-29 06:42:37.234235744 +0000 UTC m=+0.075084803 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 29 01:42:37 np0005539508 podman[248347]: 2025-11-29 06:42:37.271048495 +0000 UTC m=+0.105746870 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 29 01:42:37 np0005539508 python3[248407]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 01:42:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:37.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:38.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:42:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:39.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:40.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:41.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:42.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:43.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:44.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:42:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:45.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:46.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:47.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:48 np0005539508 podman[248434]: 2025-11-29 06:42:48.002029703 +0000 UTC m=+10.431751183 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 01:42:48 np0005539508 podman[248638]: 2025-11-29 06:42:48.16825178 +0000 UTC m=+0.066300224 container create ab476ee339f2a8c5fbac787c0045404c7acedcfbdff6a82cef58a23ba6e42f8b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=nova_compute_init, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 01:42:48 np0005539508 podman[248638]: 2025-11-29 06:42:48.129223747 +0000 UTC m=+0.027272291 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 01:42:48 np0005539508 python3[248407]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 29 01:42:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:42:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:48.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:42:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:42:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:42:49 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:42:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:42:49 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:42:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:42:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:42:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:49.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:42:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:50.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:50 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:42:50 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev a0d2ae4c-55a5-464b-a299-be500cfefe84 does not exist
Nov 29 01:42:50 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 14d7e38b-c356-4bc7-9820-7165261f234d does not exist
Nov 29 01:42:50 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev f887f232-98b1-433a-b3c5-8e89cd87998a does not exist
Nov 29 01:42:51 np0005539508 python3.9[248842]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:42:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:42:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:51.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:42:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:52.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:52 np0005539508 python3.9[249046]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 29 01:42:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:42:53 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:42:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:42:53 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:42:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:53.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:42:53 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:42:54 np0005539508 python3.9[249239]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 01:42:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:42:54
Nov 29 01:42:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:42:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:42:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['default.rgw.meta', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'volumes', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr']
Nov 29 01:42:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:42:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:42:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:42:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:42:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:42:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:42:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:42:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:42:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:54.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:42:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:42:54 np0005539508 podman[249367]: 2025-11-29 06:42:54.438176088 +0000 UTC m=+0.047070661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:42:54 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:42:54 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:42:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:55 np0005539508 python3[249509]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 01:42:55 np0005539508 podman[249367]: 2025-11-29 06:42:55.617609649 +0000 UTC m=+1.226504172 container create 61a53c3dbd62fa12dcd49b26a6576fa9da205e1a33bc18ffe84550ade4e32e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 01:42:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:42:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:55.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:42:56 np0005539508 systemd[1]: Started libpod-conmon-61a53c3dbd62fa12dcd49b26a6576fa9da205e1a33bc18ffe84550ade4e32e27.scope.
Nov 29 01:42:56 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:42:56 np0005539508 podman[249367]: 2025-11-29 06:42:56.269357518 +0000 UTC m=+1.878252101 container init 61a53c3dbd62fa12dcd49b26a6576fa9da205e1a33bc18ffe84550ade4e32e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_kirch, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:42:56 np0005539508 podman[249367]: 2025-11-29 06:42:56.282393576 +0000 UTC m=+1.891288109 container start 61a53c3dbd62fa12dcd49b26a6576fa9da205e1a33bc18ffe84550ade4e32e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 01:42:56 np0005539508 admiring_kirch[249536]: 167 167
Nov 29 01:42:56 np0005539508 systemd[1]: libpod-61a53c3dbd62fa12dcd49b26a6576fa9da205e1a33bc18ffe84550ade4e32e27.scope: Deactivated successfully.
Nov 29 01:42:56 np0005539508 conmon[249536]: conmon 61a53c3dbd62fa12dcd4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-61a53c3dbd62fa12dcd49b26a6576fa9da205e1a33bc18ffe84550ade4e32e27.scope/container/memory.events
Nov 29 01:42:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:42:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:56.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:42:56 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:42:56 np0005539508 podman[249367]: 2025-11-29 06:42:56.326542114 +0000 UTC m=+1.935436637 container attach 61a53c3dbd62fa12dcd49b26a6576fa9da205e1a33bc18ffe84550ade4e32e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_kirch, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:42:56 np0005539508 podman[249367]: 2025-11-29 06:42:56.327192972 +0000 UTC m=+1.936087465 container died 61a53c3dbd62fa12dcd49b26a6576fa9da205e1a33bc18ffe84550ade4e32e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:42:56 np0005539508 systemd[1]: var-lib-containers-storage-overlay-b3938f4c4b2856b4d0b3538d3909da9246a6c305cb2f84f7c9a1dc59313dfef7-merged.mount: Deactivated successfully.
Nov 29 01:42:56 np0005539508 podman[249367]: 2025-11-29 06:42:56.400968717 +0000 UTC m=+2.009863220 container remove 61a53c3dbd62fa12dcd49b26a6576fa9da205e1a33bc18ffe84550ade4e32e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:42:56 np0005539508 systemd[1]: libpod-conmon-61a53c3dbd62fa12dcd49b26a6576fa9da205e1a33bc18ffe84550ade4e32e27.scope: Deactivated successfully.
Nov 29 01:42:56 np0005539508 podman[249553]: 2025-11-29 06:42:56.414819509 +0000 UTC m=+0.098730642 container create e2ad515a2dbc402235ed00e4020353b5a12eaf8adb18cd2c92ca85ab5e8c64a4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team)
Nov 29 01:42:56 np0005539508 podman[249553]: 2025-11-29 06:42:56.356694086 +0000 UTC m=+0.040605329 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 01:42:56 np0005539508 python3[249509]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 29 01:42:56 np0005539508 podman[249601]: 2025-11-29 06:42:56.581944951 +0000 UTC m=+0.058179726 container create 7712a5f5cebf9fc315adbb7d9ff5f4c32408f625fd95411c9801265c8aa0514a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 01:42:56 np0005539508 podman[249601]: 2025-11-29 06:42:56.55186376 +0000 UTC m=+0.028098535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:42:56 np0005539508 systemd[1]: Started libpod-conmon-7712a5f5cebf9fc315adbb7d9ff5f4c32408f625fd95411c9801265c8aa0514a.scope.
Nov 29 01:42:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:56 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:42:56 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387fb579f218c2ef6ebd249585719d765b357bbe93d1c7e2cff6f72775a81a5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:42:56 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387fb579f218c2ef6ebd249585719d765b357bbe93d1c7e2cff6f72775a81a5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:42:56 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387fb579f218c2ef6ebd249585719d765b357bbe93d1c7e2cff6f72775a81a5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:42:56 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387fb579f218c2ef6ebd249585719d765b357bbe93d1c7e2cff6f72775a81a5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:42:56 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387fb579f218c2ef6ebd249585719d765b357bbe93d1c7e2cff6f72775a81a5e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:42:57 np0005539508 podman[249601]: 2025-11-29 06:42:57.00477414 +0000 UTC m=+0.481008945 container init 7712a5f5cebf9fc315adbb7d9ff5f4c32408f625fd95411c9801265c8aa0514a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 01:42:57 np0005539508 podman[249601]: 2025-11-29 06:42:57.011796038 +0000 UTC m=+0.488030853 container start 7712a5f5cebf9fc315adbb7d9ff5f4c32408f625fd95411c9801265c8aa0514a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mendel, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 01:42:57 np0005539508 podman[249601]: 2025-11-29 06:42:57.230092637 +0000 UTC m=+0.706327432 container attach 7712a5f5cebf9fc315adbb7d9ff5f4c32408f625fd95411c9801265c8aa0514a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mendel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 29 01:42:57 np0005539508 python3.9[249784]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:42:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:57.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:57 np0005539508 flamboyant_mendel[249651]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:42:57 np0005539508 flamboyant_mendel[249651]: --> relative data size: 1.0
Nov 29 01:42:57 np0005539508 flamboyant_mendel[249651]: --> All data devices are unavailable
Nov 29 01:42:57 np0005539508 systemd[1]: libpod-7712a5f5cebf9fc315adbb7d9ff5f4c32408f625fd95411c9801265c8aa0514a.scope: Deactivated successfully.
Nov 29 01:42:57 np0005539508 podman[249601]: 2025-11-29 06:42:57.815184902 +0000 UTC m=+1.291419707 container died 7712a5f5cebf9fc315adbb7d9ff5f4c32408f625fd95411c9801265c8aa0514a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:42:57 np0005539508 systemd[1]: var-lib-containers-storage-overlay-387fb579f218c2ef6ebd249585719d765b357bbe93d1c7e2cff6f72775a81a5e-merged.mount: Deactivated successfully.
Nov 29 01:42:57 np0005539508 podman[249601]: 2025-11-29 06:42:57.879657814 +0000 UTC m=+1.355892589 container remove 7712a5f5cebf9fc315adbb7d9ff5f4c32408f625fd95411c9801265c8aa0514a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mendel, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:42:57 np0005539508 systemd[1]: libpod-conmon-7712a5f5cebf9fc315adbb7d9ff5f4c32408f625fd95411c9801265c8aa0514a.scope: Deactivated successfully.
Nov 29 01:42:58 np0005539508 python3.9[250019]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:42:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:58.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:58 np0005539508 podman[250155]: 2025-11-29 06:42:58.498799911 +0000 UTC m=+0.040730292 container create cfe07ba786fa8b9d8069cbd423488795d039f5f0057ddde9ff8d5a4b7cdcbaf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_thompson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:42:58 np0005539508 systemd[1]: Started libpod-conmon-cfe07ba786fa8b9d8069cbd423488795d039f5f0057ddde9ff8d5a4b7cdcbaf4.scope.
Nov 29 01:42:58 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:42:58 np0005539508 podman[250155]: 2025-11-29 06:42:58.574070019 +0000 UTC m=+0.116000420 container init cfe07ba786fa8b9d8069cbd423488795d039f5f0057ddde9ff8d5a4b7cdcbaf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 01:42:58 np0005539508 podman[250155]: 2025-11-29 06:42:58.479119325 +0000 UTC m=+0.021049736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:42:58 np0005539508 podman[250155]: 2025-11-29 06:42:58.58119716 +0000 UTC m=+0.123127541 container start cfe07ba786fa8b9d8069cbd423488795d039f5f0057ddde9ff8d5a4b7cdcbaf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_thompson, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:42:58 np0005539508 podman[250155]: 2025-11-29 06:42:58.585521182 +0000 UTC m=+0.127451563 container attach cfe07ba786fa8b9d8069cbd423488795d039f5f0057ddde9ff8d5a4b7cdcbaf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 01:42:58 np0005539508 naughty_thompson[250195]: 167 167
Nov 29 01:42:58 np0005539508 systemd[1]: libpod-cfe07ba786fa8b9d8069cbd423488795d039f5f0057ddde9ff8d5a4b7cdcbaf4.scope: Deactivated successfully.
Nov 29 01:42:58 np0005539508 podman[250155]: 2025-11-29 06:42:58.590215355 +0000 UTC m=+0.132145746 container died cfe07ba786fa8b9d8069cbd423488795d039f5f0057ddde9ff8d5a4b7cdcbaf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_thompson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 01:42:58 np0005539508 systemd[1]: var-lib-containers-storage-overlay-838e9d882b3a74df8253e100d8a6de1d57eb4b7081db25e329035e00d7f39ca2-merged.mount: Deactivated successfully.
Nov 29 01:42:58 np0005539508 podman[250155]: 2025-11-29 06:42:58.631269205 +0000 UTC m=+0.173199586 container remove cfe07ba786fa8b9d8069cbd423488795d039f5f0057ddde9ff8d5a4b7cdcbaf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:42:58 np0005539508 systemd[1]: libpod-conmon-cfe07ba786fa8b9d8069cbd423488795d039f5f0057ddde9ff8d5a4b7cdcbaf4.scope: Deactivated successfully.
Nov 29 01:42:58 np0005539508 podman[250200]: 2025-11-29 06:42:58.697571809 +0000 UTC m=+0.071452260 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 01:42:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:42:58 np0005539508 podman[250286]: 2025-11-29 06:42:58.806497957 +0000 UTC m=+0.046346311 container create a95c5f7e7bda1bfa6177f4194e7d8735f81e9a2fdd50bd9b7c3310ab6252b58b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_austin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 01:42:58 np0005539508 systemd[1]: Started libpod-conmon-a95c5f7e7bda1bfa6177f4194e7d8735f81e9a2fdd50bd9b7c3310ab6252b58b.scope.
Nov 29 01:42:58 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:42:58 np0005539508 podman[250286]: 2025-11-29 06:42:58.788086437 +0000 UTC m=+0.027934821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:42:58 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2fcc50abcf56f6748d08f1a73ad6d682121fed7e56efbe975c5f385f6c9c9ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:42:58 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2fcc50abcf56f6748d08f1a73ad6d682121fed7e56efbe975c5f385f6c9c9ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:42:58 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2fcc50abcf56f6748d08f1a73ad6d682121fed7e56efbe975c5f385f6c9c9ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:42:58 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2fcc50abcf56f6748d08f1a73ad6d682121fed7e56efbe975c5f385f6c9c9ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:42:58 np0005539508 podman[250286]: 2025-11-29 06:42:58.897727335 +0000 UTC m=+0.137575709 container init a95c5f7e7bda1bfa6177f4194e7d8735f81e9a2fdd50bd9b7c3310ab6252b58b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_austin, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 01:42:58 np0005539508 podman[250286]: 2025-11-29 06:42:58.910572768 +0000 UTC m=+0.150421122 container start a95c5f7e7bda1bfa6177f4194e7d8735f81e9a2fdd50bd9b7c3310ab6252b58b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:42:58 np0005539508 podman[250286]: 2025-11-29 06:42:58.91417112 +0000 UTC m=+0.154019574 container attach a95c5f7e7bda1bfa6177f4194e7d8735f81e9a2fdd50bd9b7c3310ab6252b58b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 01:42:59 np0005539508 python3.9[250328]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764398578.3347924-4158-76573261778431/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:42:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:42:59 np0005539508 python3.9[250413]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 01:42:59 np0005539508 systemd[1]: Reloading.
Nov 29 01:42:59 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:42:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:42:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:42:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:59.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:42:59 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:42:59 np0005539508 zealous_austin[250332]: {
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:    "1": [
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:        {
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:            "devices": [
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:                "/dev/loop3"
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:            ],
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:            "lv_name": "ceph_lv0",
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:            "lv_size": "7511998464",
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:            "name": "ceph_lv0",
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:            "tags": {
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:                "ceph.cluster_name": "ceph",
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:                "ceph.crush_device_class": "",
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:                "ceph.encrypted": "0",
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:                "ceph.osd_id": "1",
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:                "ceph.type": "block",
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:                "ceph.vdo": "0"
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:            },
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:            "type": "block",
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:            "vg_name": "ceph_vg0"
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:        }
Nov 29 01:42:59 np0005539508 zealous_austin[250332]:    ]
Nov 29 01:42:59 np0005539508 zealous_austin[250332]: }
Nov 29 01:42:59 np0005539508 podman[250286]: 2025-11-29 06:42:59.789675782 +0000 UTC m=+1.029524136 container died a95c5f7e7bda1bfa6177f4194e7d8735f81e9a2fdd50bd9b7c3310ab6252b58b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_austin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 29 01:42:59 np0005539508 systemd[1]: libpod-a95c5f7e7bda1bfa6177f4194e7d8735f81e9a2fdd50bd9b7c3310ab6252b58b.scope: Deactivated successfully.
Nov 29 01:42:59 np0005539508 systemd[1]: var-lib-containers-storage-overlay-e2fcc50abcf56f6748d08f1a73ad6d682121fed7e56efbe975c5f385f6c9c9ed-merged.mount: Deactivated successfully.
Nov 29 01:43:00 np0005539508 podman[250286]: 2025-11-29 06:43:00.000642104 +0000 UTC m=+1.240490488 container remove a95c5f7e7bda1bfa6177f4194e7d8735f81e9a2fdd50bd9b7c3310ab6252b58b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_austin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Nov 29 01:43:00 np0005539508 systemd[1]: libpod-conmon-a95c5f7e7bda1bfa6177f4194e7d8735f81e9a2fdd50bd9b7c3310ab6252b58b.scope: Deactivated successfully.
Nov 29 01:43:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:00.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:00 np0005539508 python3.9[250624]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 01:43:00 np0005539508 systemd[1]: Reloading.
Nov 29 01:43:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:43:00 np0005539508 podman[250687]: 2025-11-29 06:43:00.741449628 +0000 UTC m=+0.060933143 container create f5c242c1afae0fb136df49822ade5958195ccfebc2530e3c26b8dc81a4bb3838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jepsen, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 01:43:00 np0005539508 podman[250687]: 2025-11-29 06:43:00.720121165 +0000 UTC m=+0.039604720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:43:00 np0005539508 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:43:00 np0005539508 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 01:43:01 np0005539508 systemd[1]: Started libpod-conmon-f5c242c1afae0fb136df49822ade5958195ccfebc2530e3c26b8dc81a4bb3838.scope.
Nov 29 01:43:01 np0005539508 systemd[1]: Starting nova_compute container...
Nov 29 01:43:01 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:43:01 np0005539508 podman[250687]: 2025-11-29 06:43:01.126678125 +0000 UTC m=+0.446161730 container init f5c242c1afae0fb136df49822ade5958195ccfebc2530e3c26b8dc81a4bb3838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jepsen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 01:43:01 np0005539508 podman[250687]: 2025-11-29 06:43:01.141946716 +0000 UTC m=+0.461430261 container start f5c242c1afae0fb136df49822ade5958195ccfebc2530e3c26b8dc81a4bb3838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jepsen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:43:01 np0005539508 elated_jepsen[250743]: 167 167
Nov 29 01:43:01 np0005539508 systemd[1]: libpod-f5c242c1afae0fb136df49822ade5958195ccfebc2530e3c26b8dc81a4bb3838.scope: Deactivated successfully.
Nov 29 01:43:01 np0005539508 podman[250687]: 2025-11-29 06:43:01.15056713 +0000 UTC m=+0.470050675 container attach f5c242c1afae0fb136df49822ade5958195ccfebc2530e3c26b8dc81a4bb3838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jepsen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:43:01 np0005539508 podman[250687]: 2025-11-29 06:43:01.151530347 +0000 UTC m=+0.471013872 container died f5c242c1afae0fb136df49822ade5958195ccfebc2530e3c26b8dc81a4bb3838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jepsen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:43:01 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:43:01 np0005539508 systemd[1]: var-lib-containers-storage-overlay-8905dc5f8b08d98e39bfe12b606aa0f2981fcb63de57eff13463c088bf372ca0-merged.mount: Deactivated successfully.
Nov 29 01:43:01 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d797c2b4e3996a56f9f8a6e9a63d3adde8833aeb2ad9cc0fab53d65e4c7eafbb/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 01:43:01 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d797c2b4e3996a56f9f8a6e9a63d3adde8833aeb2ad9cc0fab53d65e4c7eafbb/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 01:43:01 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d797c2b4e3996a56f9f8a6e9a63d3adde8833aeb2ad9cc0fab53d65e4c7eafbb/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 01:43:01 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d797c2b4e3996a56f9f8a6e9a63d3adde8833aeb2ad9cc0fab53d65e4c7eafbb/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 01:43:01 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d797c2b4e3996a56f9f8a6e9a63d3adde8833aeb2ad9cc0fab53d65e4c7eafbb/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 01:43:01 np0005539508 podman[250687]: 2025-11-29 06:43:01.195553721 +0000 UTC m=+0.515037236 container remove f5c242c1afae0fb136df49822ade5958195ccfebc2530e3c26b8dc81a4bb3838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jepsen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:43:01 np0005539508 podman[250744]: 2025-11-29 06:43:01.20542827 +0000 UTC m=+0.122262566 container init e2ad515a2dbc402235ed00e4020353b5a12eaf8adb18cd2c92ca85ab5e8c64a4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 01:43:01 np0005539508 systemd[1]: libpod-conmon-f5c242c1afae0fb136df49822ade5958195ccfebc2530e3c26b8dc81a4bb3838.scope: Deactivated successfully.
Nov 29 01:43:01 np0005539508 podman[250744]: 2025-11-29 06:43:01.218037817 +0000 UTC m=+0.134872063 container start e2ad515a2dbc402235ed00e4020353b5a12eaf8adb18cd2c92ca85ab5e8c64a4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 01:43:01 np0005539508 podman[250744]: nova_compute
Nov 29 01:43:01 np0005539508 nova_compute[250764]: + sudo -E kolla_set_configs
Nov 29 01:43:01 np0005539508 systemd[1]: Started nova_compute container.
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Validating config file
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Copying service configuration files
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Deleting /etc/ceph
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Creating directory /etc/ceph
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Writing out command to execute
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 01:43:01 np0005539508 nova_compute[250764]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 01:43:01 np0005539508 nova_compute[250764]: ++ cat /run_command
Nov 29 01:43:01 np0005539508 podman[250793]: 2025-11-29 06:43:01.356712136 +0000 UTC m=+0.022442976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:43:01 np0005539508 nova_compute[250764]: + CMD=nova-compute
Nov 29 01:43:01 np0005539508 nova_compute[250764]: + ARGS=
Nov 29 01:43:01 np0005539508 nova_compute[250764]: + sudo kolla_copy_cacerts
Nov 29 01:43:01 np0005539508 podman[250793]: 2025-11-29 06:43:01.466373205 +0000 UTC m=+0.132103995 container create 93775b35020e7e85d2d9ff2be97365d398206ef6b14954d4fa59563a226f9221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_noether, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:43:01 np0005539508 nova_compute[250764]: + [[ ! -n '' ]]
Nov 29 01:43:01 np0005539508 nova_compute[250764]: + . kolla_extend_start
Nov 29 01:43:01 np0005539508 nova_compute[250764]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 01:43:01 np0005539508 nova_compute[250764]: Running command: 'nova-compute'
Nov 29 01:43:01 np0005539508 nova_compute[250764]: + umask 0022
Nov 29 01:43:01 np0005539508 nova_compute[250764]: + exec nova-compute
Nov 29 01:43:01 np0005539508 systemd[1]: Started libpod-conmon-93775b35020e7e85d2d9ff2be97365d398206ef6b14954d4fa59563a226f9221.scope.
Nov 29 01:43:01 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:43:01 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fa92367fd53169deb6f0cf7a9e1775781862c3d52081faa72166663ffe1928a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:43:01 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fa92367fd53169deb6f0cf7a9e1775781862c3d52081faa72166663ffe1928a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:43:01 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fa92367fd53169deb6f0cf7a9e1775781862c3d52081faa72166663ffe1928a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:43:01 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fa92367fd53169deb6f0cf7a9e1775781862c3d52081faa72166663ffe1928a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:43:01 np0005539508 podman[250793]: 2025-11-29 06:43:01.602859262 +0000 UTC m=+0.268590072 container init 93775b35020e7e85d2d9ff2be97365d398206ef6b14954d4fa59563a226f9221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_noether, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:43:01 np0005539508 podman[250793]: 2025-11-29 06:43:01.619128552 +0000 UTC m=+0.284859352 container start 93775b35020e7e85d2d9ff2be97365d398206ef6b14954d4fa59563a226f9221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 01:43:01 np0005539508 podman[250793]: 2025-11-29 06:43:01.62332564 +0000 UTC m=+0.289056440 container attach 93775b35020e7e85d2d9ff2be97365d398206ef6b14954d4fa59563a226f9221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_noether, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:43:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:43:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:01.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:43:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:02.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:02 np0005539508 compassionate_noether[250834]: {
Nov 29 01:43:02 np0005539508 compassionate_noether[250834]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:43:02 np0005539508 compassionate_noether[250834]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:43:02 np0005539508 compassionate_noether[250834]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:43:02 np0005539508 compassionate_noether[250834]:        "osd_id": 1,
Nov 29 01:43:02 np0005539508 compassionate_noether[250834]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:43:02 np0005539508 compassionate_noether[250834]:        "type": "bluestore"
Nov 29 01:43:02 np0005539508 compassionate_noether[250834]:    }
Nov 29 01:43:02 np0005539508 compassionate_noether[250834]: }
Nov 29 01:43:02 np0005539508 systemd[1]: libpod-93775b35020e7e85d2d9ff2be97365d398206ef6b14954d4fa59563a226f9221.scope: Deactivated successfully.
Nov 29 01:43:02 np0005539508 podman[250793]: 2025-11-29 06:43:02.61197188 +0000 UTC m=+1.277702660 container died 93775b35020e7e85d2d9ff2be97365d398206ef6b14954d4fa59563a226f9221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_noether, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:43:02 np0005539508 python3.9[250976]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:43:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:43:02 np0005539508 systemd[1]: var-lib-containers-storage-overlay-2fa92367fd53169deb6f0cf7a9e1775781862c3d52081faa72166663ffe1928a-merged.mount: Deactivated successfully.
Nov 29 01:43:03 np0005539508 podman[250793]: 2025-11-29 06:43:03.086469129 +0000 UTC m=+1.752199909 container remove 93775b35020e7e85d2d9ff2be97365d398206ef6b14954d4fa59563a226f9221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_noether, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:43:03 np0005539508 systemd[1]: libpod-conmon-93775b35020e7e85d2d9ff2be97365d398206ef6b14954d4fa59563a226f9221.scope: Deactivated successfully.
Nov 29 01:43:03 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:43:03 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:43:03 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:43:03 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:43:03 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev b0cbca7c-1c60-43fa-86ea-43504293d67a does not exist
Nov 29 01:43:03 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev be01cf1e-96c0-4640-9bc5-27e4745b4bb2 does not exist
Nov 29 01:43:03 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 30b101ee-8f54-4c1b-b61f-80858004a8d0 does not exist
Nov 29 01:43:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:03.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:03 np0005539508 python3.9[251195]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:43:03 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:43:03 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.089 250780 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.090 250780 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.090 250780 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.091 250780 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.252 250780 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.283 250780 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.284 250780 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 01:43:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:04.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:43:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:43:04 np0005539508 python3.9[251349]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.793 250780 INFO nova.virt.driver [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.951 250780 INFO nova.compute.provider_config [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.969 250780 DEBUG oslo_concurrency.lockutils [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.970 250780 DEBUG oslo_concurrency.lockutils [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.970 250780 DEBUG oslo_concurrency.lockutils [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.971 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.971 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.971 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.971 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.971 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.971 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.972 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.972 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.972 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.972 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.972 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.973 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.973 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.973 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.973 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.974 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.974 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.974 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.974 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.974 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.975 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.975 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.975 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.975 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.975 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.976 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.976 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.976 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.976 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.977 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.977 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.977 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.977 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.978 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.978 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.978 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.978 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.978 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.979 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.979 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.979 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.979 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.980 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.980 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.980 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.980 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.981 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.981 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.981 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.981 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.981 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.982 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.982 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.982 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.982 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.982 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.982 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.982 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.983 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.983 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.983 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.983 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.983 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.983 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.983 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.984 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.984 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.984 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.984 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.984 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.984 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.985 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.985 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.985 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.985 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.985 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.985 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.985 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.986 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.986 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.986 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.986 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.986 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.986 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.987 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.987 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.987 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.987 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.987 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.987 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.988 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.988 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.988 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.988 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.988 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.988 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.988 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.988 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.989 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.989 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.989 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.989 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.989 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.989 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.989 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.990 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.990 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.990 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.990 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.990 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.990 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.990 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.991 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.991 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.991 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.991 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.991 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.991 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.991 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.992 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.992 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.992 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.992 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.992 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.992 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.992 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.992 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.993 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.993 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.993 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.993 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.993 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.993 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.993 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.994 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.994 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.994 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.994 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.994 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.994 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.994 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.995 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.995 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.995 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.995 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.995 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.995 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.995 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.996 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.996 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.996 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.996 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.996 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.996 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.997 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.997 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.997 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.997 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.997 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.997 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.998 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.998 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.998 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.998 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.998 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.998 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.998 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.999 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.999 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.999 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.999 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.999 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:04 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.999 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:04.999 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.000 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.000 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.000 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.000 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.000 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.000 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.000 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.001 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.001 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.001 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.001 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.001 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.001 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.002 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.002 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.002 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.002 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.002 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.002 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.003 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.003 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.003 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.003 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.003 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.003 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.003 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.004 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.004 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.004 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.004 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.004 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.004 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.004 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.004 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.005 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.005 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.005 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.005 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.005 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.005 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.005 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.006 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.006 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.006 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.006 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.006 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.007 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.007 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.007 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.007 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.007 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.008 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.008 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.008 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.008 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.008 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.008 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.009 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.009 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.009 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.009 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.009 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.009 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.009 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.010 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.010 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.010 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.010 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.010 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.010 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.010 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.011 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.011 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.011 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.011 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.011 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.011 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.011 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.011 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.012 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.012 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.012 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.012 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.012 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.012 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.012 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.013 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.013 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.013 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.013 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.013 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.013 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.014 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.014 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.014 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.014 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.014 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.014 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.014 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.015 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.015 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.015 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.015 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.015 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.015 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.016 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.016 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.016 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.016 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.016 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.016 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.017 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.017 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.017 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.017 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.017 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.018 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.018 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.018 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.018 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.018 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.018 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.019 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.019 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.019 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.019 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.019 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.020 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.020 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.020 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.020 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.020 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.020 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.021 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.021 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.021 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.021 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.021 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.022 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.022 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.022 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.022 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.023 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.023 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.023 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.023 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.023 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.023 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.024 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.024 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.024 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.024 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.024 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.025 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.025 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.025 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.025 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.025 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.026 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.026 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.026 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.026 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.026 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.026 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.027 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.027 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.027 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.027 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.027 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.028 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.028 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.028 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.028 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.028 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.028 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.029 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.029 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.029 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.029 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.029 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.030 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.030 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.030 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.030 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.031 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.031 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.031 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.031 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.031 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.031 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.032 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.032 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.032 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.032 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.032 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.033 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.033 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.033 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.033 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.033 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.033 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.034 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.034 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.034 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.034 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.034 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.035 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.035 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.035 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.035 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.035 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.036 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.036 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.036 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.036 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.036 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.037 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.037 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.037 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.037 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.037 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.038 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.038 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.038 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.038 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.038 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.038 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.039 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.039 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.039 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.039 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.039 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.040 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.040 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.040 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.040 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.040 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.040 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.041 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.041 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.041 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.041 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.041 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.042 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.042 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.042 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.042 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.042 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.043 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.043 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.043 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.043 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.043 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.043 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.044 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.044 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.044 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.044 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.044 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.045 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.045 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.045 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.045 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.045 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.046 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.046 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.046 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.046 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.046 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.046 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.047 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.047 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.047 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.047 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.047 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.048 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.048 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.048 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.048 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.048 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.049 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.049 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.049 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.049 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.049 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.050 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.050 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.050 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.050 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.050 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.051 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.051 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.051 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.051 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.052 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.052 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.052 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.052 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.052 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.053 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.053 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.053 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.053 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.053 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.054 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.054 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.054 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.054 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.055 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.055 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.055 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.055 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.055 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.056 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.056 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.056 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.056 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.056 250780 WARNING oslo_config.cfg [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 01:43:05 np0005539508 nova_compute[250764]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 01:43:05 np0005539508 nova_compute[250764]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 29 01:43:05 np0005539508 nova_compute[250764]: and ``live_migration_inbound_addr`` respectively.
Nov 29 01:43:05 np0005539508 nova_compute[250764]: ).  Its value may be silently ignored in the future.#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.057 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.057 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.057 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.057 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.057 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.058 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.058 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.058 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.058 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.059 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.059 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.059 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.059 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.059 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.059 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.060 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.060 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.060 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.060 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.rbd_secret_uuid        = 336ec58c-893b-528f-a0c1-6ed1196bc047 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.060 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.061 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.061 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.061 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.061 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.061 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.062 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.062 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.062 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.062 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.062 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.062 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.063 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.063 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.063 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.063 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.063 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.064 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.064 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.064 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.064 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.064 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.065 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.065 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.065 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.065 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.065 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.065 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.066 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.066 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.066 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.066 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.066 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.066 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.067 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.067 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.067 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.067 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.067 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.067 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.067 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.067 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.068 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.068 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.068 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.068 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.068 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.068 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.068 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.069 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.069 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.069 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.069 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.069 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.069 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.070 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.070 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.070 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.070 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.070 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.071 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.071 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.071 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.071 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.071 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.072 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.072 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.072 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.072 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.073 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.073 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.073 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.073 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.073 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.073 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.074 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.074 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.074 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.074 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.074 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.074 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.075 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.075 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.075 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.075 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.076 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.076 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.076 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.076 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.076 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.076 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.077 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.077 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.077 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.077 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.077 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.078 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.078 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.078 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.078 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.078 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.079 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.079 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.079 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.079 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.079 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.080 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.080 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.080 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.080 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.080 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.081 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.081 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.081 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.081 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.081 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.082 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.082 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.082 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.082 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.082 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.083 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.083 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.083 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.083 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.083 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.083 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.083 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.084 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.084 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.084 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.084 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.084 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.084 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.085 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.085 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.085 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.085 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.085 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.086 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.086 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.086 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.086 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.087 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.087 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.087 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.087 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.087 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.087 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.088 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.088 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.088 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.088 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.088 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.089 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.089 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.089 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.089 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.089 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.090 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.090 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.090 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.090 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.090 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.091 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.091 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.091 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.091 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.091 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.092 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.092 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.092 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.092 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.092 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.092 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.093 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.093 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.093 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.093 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.093 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.094 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.094 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.094 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.094 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.094 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.095 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.095 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.095 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.095 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.095 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.096 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.096 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.096 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.096 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.096 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.097 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.097 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.097 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.097 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.097 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.098 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.098 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.098 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.098 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.098 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.099 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.099 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.099 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.099 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.099 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.100 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.100 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.100 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.100 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.100 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.101 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.101 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.101 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.101 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.101 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.102 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.102 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.102 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.102 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.102 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.103 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.103 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.103 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.103 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.104 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.104 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.104 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.104 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.105 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.105 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.105 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.105 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.106 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.106 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.106 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.106 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.106 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.107 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.107 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.107 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.107 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.107 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.108 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.108 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.108 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.108 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.108 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.109 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.109 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.109 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.109 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.109 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.109 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.110 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.110 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.110 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.110 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.110 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.111 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.111 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.111 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.111 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.111 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.112 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.112 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.112 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.112 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.113 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.113 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.113 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.113 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.113 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.114 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.114 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.114 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.114 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.114 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.115 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.115 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.115 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.115 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.115 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.115 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.116 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.116 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.116 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.116 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.116 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.117 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.117 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.117 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.117 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.118 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.118 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.118 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.118 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.118 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.119 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.119 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.119 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.119 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.119 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.119 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.120 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.120 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.120 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.120 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.120 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.121 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.121 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.121 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.121 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.121 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.122 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.122 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.122 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.122 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.122 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.123 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.123 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.123 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.123 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.123 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.124 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.124 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.124 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.124 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.124 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.125 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.125 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.125 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.125 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.125 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.126 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.126 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.126 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.126 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.126 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.127 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.127 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.127 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.127 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.127 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.127 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.128 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.128 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.128 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.128 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.128 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.129 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.129 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.129 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.129 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.129 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.129 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.130 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.130 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.130 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.130 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.130 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.131 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.131 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.131 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.131 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.132 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.132 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.132 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.132 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.132 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.133 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.133 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.133 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.133 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.133 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.133 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.134 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.134 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.134 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.134 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.134 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.134 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.135 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.135 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.135 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.135 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.135 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.136 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.136 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.136 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.136 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.136 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.137 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.137 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.137 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.137 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.138 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.138 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.138 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.138 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.138 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.139 250780 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.162 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.163 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.163 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.163 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 29 01:43:05 np0005539508 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 01:43:05 np0005539508 systemd[1]: Started libvirt QEMU daemon.
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.231 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fb5007db7f0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.235 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fb5007db7f0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.236 250780 INFO nova.virt.libvirt.driver [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.259 250780 WARNING nova.virt.libvirt.driver [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 29 01:43:05 np0005539508 nova_compute[250764]: 2025-11-29 06:43:05.259 250780 DEBUG nova.virt.libvirt.volume.mount [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 29 01:43:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:05.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:05 np0005539508 python3.9[251554]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 29 01:43:06 np0005539508 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 01:43:06 np0005539508 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 2025-11-29 06:43:06.196 250780 INFO nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <host>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <uuid>c87c7517-e569-4e42-8023-b11f25bc4e0c</uuid>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <cpu>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <arch>x86_64</arch>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model>EPYC-Rome-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <vendor>AMD</vendor>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <microcode version='16777317'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <signature family='23' model='49' stepping='0'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature name='x2apic'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature name='tsc-deadline'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature name='osxsave'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature name='hypervisor'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature name='tsc_adjust'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature name='spec-ctrl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature name='stibp'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature name='arch-capabilities'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature name='ssbd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature name='cmp_legacy'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature name='topoext'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature name='virt-ssbd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature name='lbrv'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature name='tsc-scale'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature name='vmcb-clean'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature name='pause-filter'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature name='pfthreshold'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature name='svme-addr-chk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature name='rdctl-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature name='skip-l1dfl-vmentry'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature name='mds-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature name='pschange-mc-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <pages unit='KiB' size='4'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <pages unit='KiB' size='2048'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <pages unit='KiB' size='1048576'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </cpu>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <power_management>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <suspend_mem/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </power_management>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <iommu support='no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <migration_features>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <live/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <uri_transports>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <uri_transport>tcp</uri_transport>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <uri_transport>rdma</uri_transport>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </uri_transports>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </migration_features>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <topology>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <cells num='1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <cell id='0'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:          <memory unit='KiB'>7864324</memory>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:          <pages unit='KiB' size='4'>1966081</pages>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:          <pages unit='KiB' size='2048'>0</pages>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:          <distances>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:            <sibling id='0' value='10'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:          </distances>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:          <cpus num='8'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:          </cpus>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        </cell>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </cells>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </topology>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <cache>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </cache>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <secmodel>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model>selinux</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <doi>0</doi>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </secmodel>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <secmodel>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model>dac</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <doi>0</doi>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </secmodel>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  </host>
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <guest>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <os_type>hvm</os_type>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <arch name='i686'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <wordsize>32</wordsize>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <domain type='qemu'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <domain type='kvm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </arch>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <features>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <pae/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <nonpae/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <acpi default='on' toggle='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <apic default='on' toggle='no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <cpuselection/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <deviceboot/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <disksnapshot default='on' toggle='no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <externalSnapshot/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </features>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  </guest>
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <guest>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <os_type>hvm</os_type>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <arch name='x86_64'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <wordsize>64</wordsize>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <domain type='qemu'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <domain type='kvm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </arch>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <features>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <acpi default='on' toggle='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <apic default='on' toggle='no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <cpuselection/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <deviceboot/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <disksnapshot default='on' toggle='no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <externalSnapshot/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </features>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  </guest>
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 
Nov 29 01:43:06 np0005539508 nova_compute[250764]: </capabilities>
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 2025-11-29 06:43:06.204 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 2025-11-29 06:43:06.229 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 01:43:06 np0005539508 nova_compute[250764]: <domainCapabilities>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <domain>kvm</domain>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <arch>i686</arch>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <vcpu max='4096'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <iothreads supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <os supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <enum name='firmware'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <loader supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='type'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>rom</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>pflash</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='readonly'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>yes</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>no</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='secure'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>no</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </loader>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  </os>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <cpu>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <mode name='host-passthrough' supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='hostPassthroughMigratable'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>on</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>off</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </mode>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <mode name='maximum' supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='maximumMigratable'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>on</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>off</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </mode>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <mode name='host-model' supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <vendor>AMD</vendor>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='x2apic'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='hypervisor'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='stibp'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='ssbd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='overflow-recov'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='succor'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='ibrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='lbrv'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='tsc-scale'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='flushbyasid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='pause-filter'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='pfthreshold'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='disable' name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </mode>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <mode name='custom' supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-noTSX'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cooperlake'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cooperlake-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cooperlake-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Denverton'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mpx'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Denverton-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mpx'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Denverton-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Denverton-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Dhyana-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Genoa'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amd-psfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='auto-ibrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='no-nested-data-bp'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='null-sel-clr-base'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='stibp-always-on'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amd-psfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='auto-ibrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='no-nested-data-bp'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='null-sel-clr-base'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='stibp-always-on'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Milan'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Milan-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Milan-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amd-psfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='no-nested-data-bp'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='null-sel-clr-base'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='stibp-always-on'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Rome'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Rome-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Rome-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Rome-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='GraniteRapids'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mcdt-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pbrsb-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='prefetchiti'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='GraniteRapids-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mcdt-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pbrsb-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='prefetchiti'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='GraniteRapids-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx10'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx10-128'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx10-256'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx10-512'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mcdt-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pbrsb-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='prefetchiti'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-noTSX'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v5'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v6'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v7'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='IvyBridge'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='IvyBridge-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='IvyBridge-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='IvyBridge-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='KnightsMill'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-4fmaps'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-4vnniw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512er'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512pf'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='KnightsMill-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-4fmaps'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-4vnniw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512er'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512pf'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Opteron_G4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fma4'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xop'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Opteron_G4-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fma4'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xop'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Opteron_G5'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fma4'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tbm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xop'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Opteron_G5-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fma4'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tbm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xop'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='SapphireRapids'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='SapphireRapids-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='SapphireRapids-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='SapphireRapids-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='SierraForest'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-ne-convert'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cmpccxadd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mcdt-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pbrsb-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='SierraForest-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-ne-convert'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cmpccxadd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mcdt-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pbrsb-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-v5'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Snowridge'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='core-capability'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mpx'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='split-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Snowridge-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='core-capability'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mpx'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='split-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Snowridge-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='core-capability'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='split-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Snowridge-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='core-capability'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='split-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Snowridge-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='athlon'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnow'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnowext'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='athlon-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnow'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnowext'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='core2duo'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='core2duo-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='coreduo'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='coreduo-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='n270'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='n270-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='phenom'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnow'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnowext'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='phenom-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnow'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnowext'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </mode>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  </cpu>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <memoryBacking supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <enum name='sourceType'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <value>file</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <value>anonymous</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <value>memfd</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  </memoryBacking>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <devices>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <disk supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='diskDevice'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>disk</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>cdrom</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>floppy</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>lun</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='bus'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>fdc</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>scsi</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>usb</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>sata</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='model'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio-transitional</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio-non-transitional</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </disk>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <graphics supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='type'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>vnc</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>egl-headless</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>dbus</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </graphics>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <video supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='modelType'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>vga</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>cirrus</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>none</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>bochs</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>ramfb</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </video>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <hostdev supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='mode'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>subsystem</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='startupPolicy'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>default</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>mandatory</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>requisite</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>optional</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='subsysType'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>usb</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>pci</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>scsi</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='capsType'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='pciBackend'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </hostdev>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <rng supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='model'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio-transitional</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio-non-transitional</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='backendModel'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>random</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>egd</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>builtin</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </rng>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <filesystem supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='driverType'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>path</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>handle</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtiofs</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </filesystem>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <tpm supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='model'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>tpm-tis</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>tpm-crb</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='backendModel'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>emulator</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>external</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='backendVersion'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>2.0</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </tpm>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <redirdev supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='bus'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>usb</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </redirdev>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <channel supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='type'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>pty</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>unix</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </channel>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <crypto supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='model'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='type'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>qemu</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='backendModel'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>builtin</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </crypto>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <interface supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='backendType'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>default</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>passt</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </interface>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <panic supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='model'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>isa</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>hyperv</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </panic>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <console supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='type'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>null</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>vc</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>pty</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>dev</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>file</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>pipe</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>stdio</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>udp</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>tcp</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>unix</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>qemu-vdagent</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>dbus</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </console>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  </devices>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <features>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <gic supported='no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <vmcoreinfo supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <genid supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <backingStoreInput supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <backup supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <async-teardown supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <ps2 supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <sev supported='no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <sgx supported='no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <hyperv supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='features'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>relaxed</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>vapic</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>spinlocks</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>vpindex</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>runtime</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>synic</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>stimer</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>reset</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>vendor_id</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>frequencies</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>reenlightenment</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>tlbflush</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>ipi</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>avic</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>emsr_bitmap</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>xmm_input</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <defaults>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <spinlocks>4095</spinlocks>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <stimer_direct>on</stimer_direct>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </defaults>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </hyperv>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <launchSecurity supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='sectype'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>tdx</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </launchSecurity>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  </features>
Nov 29 01:43:06 np0005539508 nova_compute[250764]: </domainCapabilities>
Nov 29 01:43:06 np0005539508 nova_compute[250764]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 2025-11-29 06:43:06.238 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 01:43:06 np0005539508 nova_compute[250764]: <domainCapabilities>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <domain>kvm</domain>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <arch>i686</arch>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <vcpu max='240'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <iothreads supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <os supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <enum name='firmware'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <loader supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='type'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>rom</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>pflash</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='readonly'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>yes</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>no</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='secure'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>no</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </loader>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  </os>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <cpu>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <mode name='host-passthrough' supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='hostPassthroughMigratable'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>on</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>off</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </mode>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <mode name='maximum' supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='maximumMigratable'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>on</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>off</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </mode>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <mode name='host-model' supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <vendor>AMD</vendor>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='x2apic'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='hypervisor'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='stibp'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='ssbd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='overflow-recov'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='succor'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='ibrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='lbrv'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='tsc-scale'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='flushbyasid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='pause-filter'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='pfthreshold'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='disable' name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </mode>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <mode name='custom' supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-noTSX'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cooperlake'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cooperlake-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cooperlake-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Denverton'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mpx'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Denverton-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mpx'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Denverton-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Denverton-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Dhyana-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Genoa'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amd-psfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='auto-ibrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='no-nested-data-bp'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='null-sel-clr-base'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='stibp-always-on'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amd-psfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='auto-ibrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='no-nested-data-bp'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='null-sel-clr-base'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='stibp-always-on'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Milan'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Milan-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Milan-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amd-psfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='no-nested-data-bp'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='null-sel-clr-base'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='stibp-always-on'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Rome'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Rome-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Rome-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Rome-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='GraniteRapids'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mcdt-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pbrsb-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='prefetchiti'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='GraniteRapids-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mcdt-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pbrsb-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='prefetchiti'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='GraniteRapids-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx10'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx10-128'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx10-256'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx10-512'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mcdt-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pbrsb-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='prefetchiti'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-noTSX'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:06.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v5'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v6'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v7'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='IvyBridge'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='IvyBridge-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='IvyBridge-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='IvyBridge-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='KnightsMill'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-4fmaps'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-4vnniw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512er'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512pf'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='KnightsMill-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-4fmaps'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-4vnniw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512er'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512pf'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Opteron_G4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fma4'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xop'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Opteron_G4-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fma4'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xop'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Opteron_G5'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fma4'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tbm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xop'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Opteron_G5-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fma4'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tbm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xop'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='SapphireRapids'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='SapphireRapids-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='SapphireRapids-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='SapphireRapids-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='SierraForest'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-ne-convert'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cmpccxadd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mcdt-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pbrsb-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='SierraForest-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-ne-convert'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cmpccxadd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mcdt-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pbrsb-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-v5'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Snowridge'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='core-capability'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mpx'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='split-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Snowridge-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='core-capability'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mpx'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='split-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Snowridge-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='core-capability'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='split-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Snowridge-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='core-capability'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='split-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Snowridge-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='athlon'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnow'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnowext'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='athlon-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnow'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnowext'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='core2duo'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='core2duo-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='coreduo'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='coreduo-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='n270'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='n270-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='phenom'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnow'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnowext'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='phenom-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnow'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnowext'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </mode>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  </cpu>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <memoryBacking supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <enum name='sourceType'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <value>file</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <value>anonymous</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <value>memfd</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  </memoryBacking>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <devices>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <disk supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='diskDevice'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>disk</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>cdrom</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>floppy</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>lun</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='bus'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>ide</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>fdc</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>scsi</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>usb</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>sata</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='model'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio-transitional</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio-non-transitional</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </disk>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <graphics supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='type'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>vnc</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>egl-headless</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>dbus</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </graphics>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <video supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='modelType'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>vga</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>cirrus</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>none</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>bochs</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>ramfb</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </video>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <hostdev supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='mode'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>subsystem</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='startupPolicy'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>default</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>mandatory</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>requisite</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>optional</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='subsysType'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>usb</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>pci</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>scsi</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='capsType'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='pciBackend'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </hostdev>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <rng supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='model'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio-transitional</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio-non-transitional</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='backendModel'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>random</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>egd</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>builtin</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </rng>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <filesystem supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='driverType'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>path</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>handle</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtiofs</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </filesystem>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <tpm supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='model'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>tpm-tis</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>tpm-crb</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='backendModel'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>emulator</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>external</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='backendVersion'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>2.0</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </tpm>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <redirdev supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='bus'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>usb</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </redirdev>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <channel supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='type'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>pty</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>unix</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </channel>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <crypto supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='model'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='type'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>qemu</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='backendModel'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>builtin</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </crypto>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <interface supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='backendType'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>default</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>passt</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </interface>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <panic supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='model'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>isa</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>hyperv</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </panic>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <console supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='type'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>null</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>vc</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>pty</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>dev</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>file</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>pipe</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>stdio</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>udp</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>tcp</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>unix</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>qemu-vdagent</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>dbus</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </console>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  </devices>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <features>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <gic supported='no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <vmcoreinfo supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <genid supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <backingStoreInput supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <backup supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <async-teardown supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <ps2 supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <sev supported='no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <sgx supported='no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <hyperv supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='features'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>relaxed</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>vapic</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>spinlocks</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>vpindex</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>runtime</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>synic</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>stimer</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>reset</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>vendor_id</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>frequencies</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>reenlightenment</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>tlbflush</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>ipi</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>avic</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>emsr_bitmap</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>xmm_input</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <defaults>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <spinlocks>4095</spinlocks>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <stimer_direct>on</stimer_direct>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </defaults>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </hyperv>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <launchSecurity supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='sectype'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>tdx</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </launchSecurity>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  </features>
Nov 29 01:43:06 np0005539508 nova_compute[250764]: </domainCapabilities>
Nov 29 01:43:06 np0005539508 nova_compute[250764]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 2025-11-29 06:43:06.272 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 2025-11-29 06:43:06.275 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 01:43:06 np0005539508 nova_compute[250764]: <domainCapabilities>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <domain>kvm</domain>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <arch>x86_64</arch>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <vcpu max='4096'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <iothreads supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <os supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <enum name='firmware'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <value>efi</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <loader supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='type'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>rom</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>pflash</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='readonly'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>yes</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>no</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='secure'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>yes</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>no</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </loader>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  </os>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <cpu>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <mode name='host-passthrough' supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='hostPassthroughMigratable'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>on</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>off</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </mode>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <mode name='maximum' supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='maximumMigratable'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>on</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>off</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </mode>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <mode name='host-model' supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <vendor>AMD</vendor>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='x2apic'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='hypervisor'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='stibp'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='ssbd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='overflow-recov'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='succor'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='ibrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='lbrv'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='tsc-scale'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='flushbyasid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='pause-filter'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='pfthreshold'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='disable' name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </mode>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <mode name='custom' supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-noTSX'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cooperlake'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cooperlake-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cooperlake-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Denverton'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mpx'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Denverton-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mpx'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Denverton-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Denverton-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Dhyana-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Genoa'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amd-psfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='auto-ibrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='no-nested-data-bp'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='null-sel-clr-base'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='stibp-always-on'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amd-psfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='auto-ibrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='no-nested-data-bp'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='null-sel-clr-base'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='stibp-always-on'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Milan'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Milan-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Milan-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amd-psfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='no-nested-data-bp'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='null-sel-clr-base'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='stibp-always-on'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Rome'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Rome-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Rome-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Rome-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='GraniteRapids'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mcdt-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pbrsb-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='prefetchiti'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='GraniteRapids-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mcdt-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pbrsb-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='prefetchiti'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='GraniteRapids-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx10'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx10-128'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx10-256'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx10-512'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mcdt-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pbrsb-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='prefetchiti'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-noTSX'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v5'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v6'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v7'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='IvyBridge'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='IvyBridge-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='IvyBridge-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='IvyBridge-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='KnightsMill'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-4fmaps'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-4vnniw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512er'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512pf'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='KnightsMill-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-4fmaps'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-4vnniw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512er'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512pf'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Opteron_G4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fma4'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xop'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Opteron_G4-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fma4'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xop'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Opteron_G5'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fma4'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tbm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xop'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Opteron_G5-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fma4'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tbm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xop'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='SapphireRapids'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='SapphireRapids-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='SapphireRapids-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='SapphireRapids-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='SierraForest'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-ne-convert'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cmpccxadd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mcdt-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pbrsb-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='SierraForest-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-ne-convert'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cmpccxadd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mcdt-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pbrsb-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-v5'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Snowridge'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='core-capability'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mpx'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='split-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Snowridge-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='core-capability'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mpx'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='split-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Snowridge-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='core-capability'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='split-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Snowridge-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='core-capability'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='split-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Snowridge-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='athlon'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnow'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnowext'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='athlon-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnow'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnowext'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='core2duo'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='core2duo-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='coreduo'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='coreduo-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='n270'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='n270-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='phenom'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnow'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnowext'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='phenom-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnow'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnowext'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </mode>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  </cpu>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <memoryBacking supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <enum name='sourceType'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <value>file</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <value>anonymous</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <value>memfd</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  </memoryBacking>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <devices>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <disk supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='diskDevice'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>disk</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>cdrom</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>floppy</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>lun</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='bus'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>fdc</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>scsi</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>usb</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>sata</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='model'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio-transitional</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio-non-transitional</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </disk>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <graphics supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='type'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>vnc</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>egl-headless</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>dbus</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </graphics>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <video supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='modelType'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>vga</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>cirrus</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>none</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>bochs</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>ramfb</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </video>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <hostdev supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='mode'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>subsystem</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='startupPolicy'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>default</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>mandatory</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>requisite</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>optional</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='subsysType'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>usb</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>pci</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>scsi</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='capsType'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='pciBackend'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </hostdev>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <rng supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='model'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio-transitional</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio-non-transitional</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='backendModel'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>random</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>egd</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>builtin</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </rng>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <filesystem supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='driverType'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>path</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>handle</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtiofs</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </filesystem>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <tpm supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='model'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>tpm-tis</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>tpm-crb</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='backendModel'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>emulator</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>external</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='backendVersion'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>2.0</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </tpm>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <redirdev supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='bus'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>usb</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </redirdev>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <channel supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='type'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>pty</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>unix</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </channel>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <crypto supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='model'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='type'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>qemu</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='backendModel'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>builtin</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </crypto>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <interface supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='backendType'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>default</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>passt</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </interface>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <panic supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='model'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>isa</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>hyperv</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </panic>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <console supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='type'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>null</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>vc</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>pty</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>dev</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>file</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>pipe</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>stdio</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>udp</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>tcp</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>unix</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>qemu-vdagent</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>dbus</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </console>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  </devices>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <features>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <gic supported='no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <vmcoreinfo supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <genid supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <backingStoreInput supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <backup supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <async-teardown supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <ps2 supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <sev supported='no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <sgx supported='no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <hyperv supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='features'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>relaxed</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>vapic</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>spinlocks</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>vpindex</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>runtime</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>synic</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>stimer</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>reset</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>vendor_id</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>frequencies</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>reenlightenment</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>tlbflush</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>ipi</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>avic</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>emsr_bitmap</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>xmm_input</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <defaults>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <spinlocks>4095</spinlocks>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <stimer_direct>on</stimer_direct>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </defaults>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </hyperv>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <launchSecurity supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='sectype'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>tdx</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </launchSecurity>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  </features>
Nov 29 01:43:06 np0005539508 nova_compute[250764]: </domainCapabilities>
Nov 29 01:43:06 np0005539508 nova_compute[250764]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 2025-11-29 06:43:06.352 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 01:43:06 np0005539508 nova_compute[250764]: <domainCapabilities>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <domain>kvm</domain>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <arch>x86_64</arch>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <vcpu max='240'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <iothreads supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <os supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <enum name='firmware'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <loader supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='type'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>rom</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>pflash</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='readonly'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>yes</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>no</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='secure'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>no</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </loader>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  </os>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <cpu>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <mode name='host-passthrough' supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='hostPassthroughMigratable'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>on</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>off</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </mode>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <mode name='maximum' supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='maximumMigratable'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>on</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>off</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </mode>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <mode name='host-model' supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <vendor>AMD</vendor>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='x2apic'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='hypervisor'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='stibp'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='ssbd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='overflow-recov'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='succor'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='ibrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='lbrv'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='tsc-scale'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='flushbyasid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='pause-filter'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='pfthreshold'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <feature policy='disable' name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </mode>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <mode name='custom' supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-noTSX'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Broadwell-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cooperlake'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cooperlake-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Cooperlake-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Denverton'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mpx'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Denverton-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mpx'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Denverton-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Denverton-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Dhyana-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Genoa'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amd-psfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='auto-ibrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='no-nested-data-bp'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='null-sel-clr-base'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='stibp-always-on'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amd-psfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='auto-ibrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='no-nested-data-bp'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='null-sel-clr-base'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='stibp-always-on'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Milan'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Milan-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Milan-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amd-psfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='no-nested-data-bp'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='null-sel-clr-base'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='stibp-always-on'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Rome'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Rome-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Rome-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-Rome-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='EPYC-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='GraniteRapids'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mcdt-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pbrsb-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='prefetchiti'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='GraniteRapids-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mcdt-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pbrsb-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='prefetchiti'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='GraniteRapids-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx10'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx10-128'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx10-256'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx10-512'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mcdt-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pbrsb-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='prefetchiti'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-noTSX'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Haswell-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v5'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v6'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Icelake-Server-v7'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='IvyBridge'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='IvyBridge-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='IvyBridge-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='IvyBridge-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='KnightsMill'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-4fmaps'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-4vnniw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512er'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512pf'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='KnightsMill-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-4fmaps'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-4vnniw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512er'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512pf'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Opteron_G4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fma4'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xop'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Opteron_G4-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fma4'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xop'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Opteron_G5'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fma4'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tbm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xop'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Opteron_G5-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fma4'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tbm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xop'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='SapphireRapids'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='SapphireRapids-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='SapphireRapids-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='SapphireRapids-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='amx-tile'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-bf16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-fp16'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bitalg'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrc'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fzrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='la57'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='taa-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xfd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='SierraForest'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-ne-convert'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cmpccxadd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mcdt-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pbrsb-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='SierraForest-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-ifma'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-ne-convert'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx-vnni-int8'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cmpccxadd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fbsdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='fsrs'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ibrs-all'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mcdt-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pbrsb-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='psdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='serialize'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vaes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Client-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='hle'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='rtm'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Skylake-Server-v5'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512bw'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512cd'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512dq'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512f'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='avx512vl'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='invpcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pcid'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='pku'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Snowridge'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='core-capability'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mpx'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='split-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Snowridge-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='core-capability'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='mpx'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='split-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Snowridge-v2'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='core-capability'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='split-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Snowridge-v3'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='core-capability'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='split-lock-detect'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='Snowridge-v4'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='cldemote'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='erms'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='gfni'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdir64b'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='movdiri'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='xsaves'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='athlon'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnow'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnowext'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='athlon-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnow'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnowext'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='core2duo'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='core2duo-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='coreduo'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='coreduo-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='n270'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='n270-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='ss'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='phenom'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnow'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnowext'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <blockers model='phenom-v1'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnow'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <feature name='3dnowext'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </blockers>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </mode>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  </cpu>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <memoryBacking supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <enum name='sourceType'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <value>file</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <value>anonymous</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <value>memfd</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  </memoryBacking>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <devices>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <disk supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='diskDevice'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>disk</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>cdrom</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>floppy</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>lun</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='bus'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>ide</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>fdc</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>scsi</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>usb</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>sata</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='model'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio-transitional</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio-non-transitional</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </disk>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <graphics supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='type'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>vnc</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>egl-headless</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>dbus</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </graphics>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <video supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='modelType'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>vga</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>cirrus</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>none</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>bochs</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>ramfb</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </video>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <hostdev supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='mode'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>subsystem</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='startupPolicy'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>default</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>mandatory</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>requisite</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>optional</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='subsysType'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>usb</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>pci</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>scsi</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='capsType'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='pciBackend'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </hostdev>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <rng supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='model'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio-transitional</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtio-non-transitional</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='backendModel'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>random</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>egd</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>builtin</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </rng>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <filesystem supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='driverType'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>path</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>handle</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>virtiofs</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </filesystem>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <tpm supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='model'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>tpm-tis</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>tpm-crb</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='backendModel'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>emulator</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>external</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='backendVersion'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>2.0</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </tpm>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <redirdev supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='bus'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>usb</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </redirdev>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <channel supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='type'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>pty</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>unix</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </channel>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <crypto supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='model'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='type'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>qemu</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='backendModel'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>builtin</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </crypto>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <interface supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='backendType'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>default</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>passt</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </interface>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <panic supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='model'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>isa</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>hyperv</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </panic>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <console supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='type'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>null</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>vc</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>pty</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>dev</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>file</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>pipe</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>stdio</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>udp</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>tcp</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>unix</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>qemu-vdagent</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>dbus</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </console>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  </devices>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <features>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <gic supported='no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <vmcoreinfo supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <genid supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <backingStoreInput supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <backup supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <async-teardown supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <ps2 supported='yes'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <sev supported='no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <sgx supported='no'/>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <hyperv supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='features'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>relaxed</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>vapic</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>spinlocks</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>vpindex</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>runtime</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>synic</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>stimer</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>reset</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>vendor_id</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>frequencies</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>reenlightenment</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>tlbflush</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>ipi</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>avic</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>emsr_bitmap</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>xmm_input</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <defaults>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <spinlocks>4095</spinlocks>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <stimer_direct>on</stimer_direct>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </defaults>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </hyperv>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    <launchSecurity supported='yes'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      <enum name='sectype'>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:        <value>tdx</value>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:      </enum>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:    </launchSecurity>
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  </features>
Nov 29 01:43:06 np0005539508 nova_compute[250764]: </domainCapabilities>
Nov 29 01:43:06 np0005539508 nova_compute[250764]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 2025-11-29 06:43:06.417 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 2025-11-29 06:43:06.418 250780 INFO nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Secure Boot support detected#033[00m
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 2025-11-29 06:43:06.420 250780 INFO nova.virt.libvirt.driver [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 2025-11-29 06:43:06.433 250780 DEBUG nova.virt.libvirt.driver [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] cpu compare xml: <cpu match="exact">
Nov 29 01:43:06 np0005539508 nova_compute[250764]:  <model>Nehalem</model>
Nov 29 01:43:06 np0005539508 nova_compute[250764]: </cpu>
Nov 29 01:43:06 np0005539508 nova_compute[250764]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019#033[00m
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 2025-11-29 06:43:06.437 250780 DEBUG nova.virt.libvirt.driver [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 2025-11-29 06:43:06.471 250780 INFO nova.virt.node [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Determined node identity 36ed0248-8d04-4532-95bb-daab89f12202 from /var/lib/nova/compute_id#033[00m
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 2025-11-29 06:43:06.565 250780 WARNING nova.compute.manager [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Compute nodes ['36ed0248-8d04-4532-95bb-daab89f12202'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 2025-11-29 06:43:06.606 250780 INFO nova.compute.manager [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 29 01:43:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 2025-11-29 06:43:06.765 250780 WARNING nova.compute.manager [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 2025-11-29 06:43:06.765 250780 DEBUG oslo_concurrency.lockutils [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 2025-11-29 06:43:06.766 250780 DEBUG oslo_concurrency.lockutils [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 2025-11-29 06:43:06.766 250780 DEBUG oslo_concurrency.lockutils [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 2025-11-29 06:43:06.766 250780 DEBUG nova.compute.resource_tracker [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 01:43:06 np0005539508 nova_compute[250764]: 2025-11-29 06:43:06.766 250780 DEBUG oslo_concurrency.processutils [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 01:43:07 np0005539508 python3.9[251742]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 01:43:07 np0005539508 systemd[1]: Stopping nova_compute container...
Nov 29 01:43:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 01:43:07 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2326258606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 01:43:07 np0005539508 nova_compute[250764]: 2025-11-29 06:43:07.200 250780 DEBUG oslo_concurrency.processutils [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 01:43:07 np0005539508 nova_compute[250764]: 2025-11-29 06:43:07.214 250780 DEBUG oslo_concurrency.lockutils [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 01:43:07 np0005539508 nova_compute[250764]: 2025-11-29 06:43:07.215 250780 DEBUG oslo_concurrency.lockutils [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 01:43:07 np0005539508 nova_compute[250764]: 2025-11-29 06:43:07.215 250780 DEBUG oslo_concurrency.lockutils [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 01:43:07 np0005539508 podman[251784]: 2025-11-29 06:43:07.592352785 +0000 UTC m=+0.064373551 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent)
Nov 29 01:43:07 np0005539508 virtqemud[251417]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 29 01:43:07 np0005539508 virtqemud[251417]: hostname: compute-0
Nov 29 01:43:07 np0005539508 virtqemud[251417]: End of file while reading data: Input/output error
Nov 29 01:43:07 np0005539508 systemd[1]: libpod-e2ad515a2dbc402235ed00e4020353b5a12eaf8adb18cd2c92ca85ab5e8c64a4.scope: Deactivated successfully.
Nov 29 01:43:07 np0005539508 systemd[1]: libpod-e2ad515a2dbc402235ed00e4020353b5a12eaf8adb18cd2c92ca85ab5e8c64a4.scope: Consumed 4.017s CPU time.
Nov 29 01:43:07 np0005539508 podman[251767]: 2025-11-29 06:43:07.60847228 +0000 UTC m=+0.452775015 container died e2ad515a2dbc402235ed00e4020353b5a12eaf8adb18cd2c92ca85ab5e8c64a4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 01:43:07 np0005539508 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e2ad515a2dbc402235ed00e4020353b5a12eaf8adb18cd2c92ca85ab5e8c64a4-userdata-shm.mount: Deactivated successfully.
Nov 29 01:43:07 np0005539508 systemd[1]: var-lib-containers-storage-overlay-d797c2b4e3996a56f9f8a6e9a63d3adde8833aeb2ad9cc0fab53d65e4c7eafbb-merged.mount: Deactivated successfully.
Nov 29 01:43:07 np0005539508 podman[251785]: 2025-11-29 06:43:07.657023112 +0000 UTC m=+0.126599728 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 01:43:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:43:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:07.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:43:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:08.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:43:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:09.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:10.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:10 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:43:10 np0005539508 podman[251767]: 2025-11-29 06:43:10.337968808 +0000 UTC m=+3.182271533 container cleanup e2ad515a2dbc402235ed00e4020353b5a12eaf8adb18cd2c92ca85ab5e8c64a4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 01:43:10 np0005539508 podman[251767]: nova_compute
Nov 29 01:43:10 np0005539508 podman[251848]: nova_compute
Nov 29 01:43:10 np0005539508 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 29 01:43:10 np0005539508 systemd[1]: Stopped nova_compute container.
Nov 29 01:43:10 np0005539508 systemd[1]: Starting nova_compute container...
Nov 29 01:43:10 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:43:10 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d797c2b4e3996a56f9f8a6e9a63d3adde8833aeb2ad9cc0fab53d65e4c7eafbb/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 01:43:10 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d797c2b4e3996a56f9f8a6e9a63d3adde8833aeb2ad9cc0fab53d65e4c7eafbb/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 01:43:10 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d797c2b4e3996a56f9f8a6e9a63d3adde8833aeb2ad9cc0fab53d65e4c7eafbb/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 01:43:10 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d797c2b4e3996a56f9f8a6e9a63d3adde8833aeb2ad9cc0fab53d65e4c7eafbb/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 01:43:10 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d797c2b4e3996a56f9f8a6e9a63d3adde8833aeb2ad9cc0fab53d65e4c7eafbb/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 01:43:10 np0005539508 podman[251861]: 2025-11-29 06:43:10.633233048 +0000 UTC m=+0.187577193 container init e2ad515a2dbc402235ed00e4020353b5a12eaf8adb18cd2c92ca85ab5e8c64a4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 01:43:10 np0005539508 podman[251861]: 2025-11-29 06:43:10.646142562 +0000 UTC m=+0.200486657 container start e2ad515a2dbc402235ed00e4020353b5a12eaf8adb18cd2c92ca85ab5e8c64a4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 01:43:10 np0005539508 podman[251861]: nova_compute
Nov 29 01:43:10 np0005539508 nova_compute[251877]: + sudo -E kolla_set_configs
Nov 29 01:43:10 np0005539508 systemd[1]: Started nova_compute container.
Nov 29 01:43:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Validating config file
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Copying service configuration files
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Deleting /etc/ceph
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Creating directory /etc/ceph
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Writing out command to execute
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 01:43:10 np0005539508 nova_compute[251877]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 01:43:10 np0005539508 nova_compute[251877]: ++ cat /run_command
Nov 29 01:43:10 np0005539508 nova_compute[251877]: + CMD=nova-compute
Nov 29 01:43:10 np0005539508 nova_compute[251877]: + ARGS=
Nov 29 01:43:10 np0005539508 nova_compute[251877]: + sudo kolla_copy_cacerts
Nov 29 01:43:10 np0005539508 nova_compute[251877]: + [[ ! -n '' ]]
Nov 29 01:43:10 np0005539508 nova_compute[251877]: + . kolla_extend_start
Nov 29 01:43:10 np0005539508 nova_compute[251877]: Running command: 'nova-compute'
Nov 29 01:43:10 np0005539508 nova_compute[251877]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 01:43:10 np0005539508 nova_compute[251877]: + umask 0022
Nov 29 01:43:10 np0005539508 nova_compute[251877]: + exec nova-compute
Nov 29 01:43:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:11.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:43:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:12.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:43:12 np0005539508 python3.9[252094]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 29 01:43:12 np0005539508 systemd[1]: Started libpod-conmon-ab476ee339f2a8c5fbac787c0045404c7acedcfbdff6a82cef58a23ba6e42f8b.scope.
Nov 29 01:43:12 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:43:12 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cc7b3cfdcdf9a6442ddab84d404c5723027fe9c95bf1e8860f3d26cf96a0c6/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 29 01:43:12 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cc7b3cfdcdf9a6442ddab84d404c5723027fe9c95bf1e8860f3d26cf96a0c6/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 01:43:12 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cc7b3cfdcdf9a6442ddab84d404c5723027fe9c95bf1e8860f3d26cf96a0c6/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 29 01:43:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:43:12 np0005539508 nova_compute[251877]: 2025-11-29 06:43:12.759 251881 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 01:43:12 np0005539508 nova_compute[251877]: 2025-11-29 06:43:12.759 251881 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 01:43:12 np0005539508 nova_compute[251877]: 2025-11-29 06:43:12.759 251881 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 01:43:12 np0005539508 nova_compute[251877]: 2025-11-29 06:43:12.760 251881 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 29 01:43:12 np0005539508 podman[252119]: 2025-11-29 06:43:12.829316885 +0000 UTC m=+0.228603661 container init ab476ee339f2a8c5fbac787c0045404c7acedcfbdff6a82cef58a23ba6e42f8b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute_init, tcib_managed=true)
Nov 29 01:43:12 np0005539508 podman[252119]: 2025-11-29 06:43:12.850460001 +0000 UTC m=+0.249746797 container start ab476ee339f2a8c5fbac787c0045404c7acedcfbdff6a82cef58a23ba6e42f8b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, managed_by=edpm_ansible)
Nov 29 01:43:12 np0005539508 nova_compute[251877]: 2025-11-29 06:43:12.917 251881 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 01:43:12 np0005539508 nova_compute_init[252143]: INFO:nova_statedir:Applying nova statedir ownership
Nov 29 01:43:12 np0005539508 nova_compute_init[252143]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 29 01:43:12 np0005539508 nova_compute_init[252143]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 29 01:43:12 np0005539508 nova_compute_init[252143]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 29 01:43:12 np0005539508 nova_compute_init[252143]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 29 01:43:12 np0005539508 nova_compute_init[252143]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 29 01:43:12 np0005539508 nova_compute_init[252143]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 29 01:43:12 np0005539508 nova_compute_init[252143]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 29 01:43:12 np0005539508 nova_compute_init[252143]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 29 01:43:12 np0005539508 nova_compute_init[252143]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 29 01:43:12 np0005539508 nova_compute_init[252143]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 29 01:43:12 np0005539508 nova_compute_init[252143]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 29 01:43:12 np0005539508 nova_compute_init[252143]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 29 01:43:12 np0005539508 nova_compute_init[252143]: INFO:nova_statedir:Nova statedir ownership complete
Nov 29 01:43:12 np0005539508 python3.9[252094]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 29 01:43:12 np0005539508 systemd[1]: libpod-ab476ee339f2a8c5fbac787c0045404c7acedcfbdff6a82cef58a23ba6e42f8b.scope: Deactivated successfully.
Nov 29 01:43:12 np0005539508 nova_compute[251877]: 2025-11-29 06:43:12.947 251881 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 01:43:12 np0005539508 nova_compute[251877]: 2025-11-29 06:43:12.948 251881 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 01:43:13 np0005539508 podman[252145]: 2025-11-29 06:43:12.999481426 +0000 UTC m=+0.044398384 container died ab476ee339f2a8c5fbac787c0045404c7acedcfbdff6a82cef58a23ba6e42f8b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0)
Nov 29 01:43:13 np0005539508 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ab476ee339f2a8c5fbac787c0045404c7acedcfbdff6a82cef58a23ba6e42f8b-userdata-shm.mount: Deactivated successfully.
Nov 29 01:43:13 np0005539508 systemd[1]: var-lib-containers-storage-overlay-79cc7b3cfdcdf9a6442ddab84d404c5723027fe9c95bf1e8860f3d26cf96a0c6-merged.mount: Deactivated successfully.
Nov 29 01:43:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:43:13 np0005539508 podman[252147]: 2025-11-29 06:43:13.052766549 +0000 UTC m=+0.093439077 container cleanup ab476ee339f2a8c5fbac787c0045404c7acedcfbdff6a82cef58a23ba6e42f8b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 01:43:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:43:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:43:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:43:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:43:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:43:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:43:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:43:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:43:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:43:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:43:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:43:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:43:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:43:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:43:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:43:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:43:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:43:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:43:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:43:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:43:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:43:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:43:13 np0005539508 systemd[1]: libpod-conmon-ab476ee339f2a8c5fbac787c0045404c7acedcfbdff6a82cef58a23ba6e42f8b.scope: Deactivated successfully.
Nov 29 01:43:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:13.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:43:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:14.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:43:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:43:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:43:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:15.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:16 np0005539508 systemd[1]: session-50.scope: Deactivated successfully.
Nov 29 01:43:16 np0005539508 systemd[1]: session-50.scope: Consumed 2min 30.922s CPU time.
Nov 29 01:43:16 np0005539508 systemd-logind[797]: Session 50 logged out. Waiting for processes to exit.
Nov 29 01:43:16 np0005539508 systemd-logind[797]: Removed session 50.
Nov 29 01:43:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:16.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:43:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:43:17.231 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:43:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:43:17.232 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:43:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:43:17.233 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:43:17 np0005539508 nova_compute[251877]: 2025-11-29 06:43:17.295 251881 INFO nova.virt.driver [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 29 01:43:17 np0005539508 nova_compute[251877]: 2025-11-29 06:43:17.414 251881 INFO nova.compute.provider_config [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 29 01:43:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:17.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:43:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:18.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:43:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.951 251881 DEBUG oslo_concurrency.lockutils [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.951 251881 DEBUG oslo_concurrency.lockutils [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.952 251881 DEBUG oslo_concurrency.lockutils [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.953 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.953 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.953 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.953 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.954 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.954 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.954 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.955 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.955 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.955 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.956 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.956 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.956 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.957 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.957 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.957 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.958 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.958 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.958 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.959 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.959 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.959 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.960 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.960 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.961 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.961 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.962 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.962 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.962 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.963 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.963 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.963 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.964 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.964 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.964 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.965 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.965 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.966 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.966 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.967 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.967 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.967 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.968 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.968 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.968 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.969 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.969 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.969 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.970 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.970 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.971 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.971 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.972 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.972 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.972 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.973 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.973 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.973 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.973 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.974 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.974 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.974 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.975 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.975 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.975 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.976 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.976 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.976 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.976 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.977 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.977 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.977 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.978 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.978 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.978 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.979 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.979 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.979 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.980 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.980 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.980 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.981 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.981 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.981 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.981 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.982 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.982 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.982 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.983 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.983 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.983 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.984 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.984 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.984 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.984 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.985 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.985 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.985 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.986 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.986 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.986 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.987 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.987 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.987 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.987 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.988 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.988 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.988 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.989 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.989 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.989 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.989 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.990 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.990 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.991 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.991 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.991 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.991 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.992 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.992 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.993 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.993 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.993 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.994 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.994 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.995 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.995 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.995 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.996 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.996 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.997 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.997 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.997 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.998 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.998 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.999 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:18 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.999 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:18.999 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.000 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.000 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.001 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.001 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.001 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.001 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.002 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.002 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.002 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.002 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.003 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.003 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.003 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.004 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.004 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.004 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.004 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.005 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.005 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.005 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.005 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.006 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.006 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.006 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.007 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.007 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.007 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.008 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.008 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.008 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.008 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.009 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.009 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.009 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.009 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.010 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.010 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.010 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.011 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.011 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.011 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.011 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.012 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.012 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.012 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.012 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.013 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.013 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.013 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.014 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.014 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.014 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.014 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.014 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.015 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.015 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.015 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.015 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.015 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.016 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.016 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.016 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.016 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.016 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.017 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.017 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.017 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.017 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.017 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.018 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.018 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.018 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.018 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.018 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.019 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.019 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.019 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.019 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.019 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.020 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.020 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.020 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.020 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.020 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.021 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.021 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.021 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.021 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.021 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.022 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.022 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.022 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.022 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.023 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.023 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.023 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.023 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.023 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.024 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.024 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.024 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.024 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.025 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.025 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.025 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.025 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.026 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.026 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.026 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.026 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.026 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.027 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.027 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.027 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.027 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.027 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.028 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.028 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.028 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.028 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.028 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.028 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.029 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.029 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.029 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.029 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.030 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.030 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.030 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.030 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.030 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.031 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.031 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.031 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.031 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.031 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.032 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.032 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.032 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.032 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.032 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.033 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.033 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.033 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.033 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.033 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.034 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.034 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.034 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.034 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.034 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.035 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.035 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.035 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.035 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.035 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.036 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.036 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.036 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.036 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.037 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.037 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.037 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.037 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.037 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.037 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.037 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.038 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.038 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.038 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.038 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.038 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.038 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.038 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.039 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.039 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.039 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.039 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.039 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.039 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.039 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.040 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.040 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.040 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.040 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.040 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.040 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.040 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.041 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.041 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.041 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.041 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.041 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.041 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.041 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.042 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.042 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.042 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.042 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.042 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.042 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.042 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.043 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.043 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.043 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.043 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.043 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.043 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.043 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.044 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.044 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.044 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.044 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.044 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.044 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.044 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.045 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.045 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.045 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.045 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.045 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.046 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.046 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.046 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.046 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.046 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.046 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.046 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.047 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.047 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.047 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.047 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.047 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.047 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.047 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.047 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.048 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.048 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.048 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.048 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.048 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.048 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.048 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.049 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.049 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.049 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.049 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.049 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.049 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.050 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.050 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.050 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.050 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.050 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.050 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.050 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.051 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.051 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.051 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.051 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.051 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.051 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.051 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.052 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.052 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.052 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.052 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.052 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.052 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.052 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.053 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.053 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.053 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.053 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.053 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.053 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.053 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.054 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.054 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.054 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.054 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.054 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.054 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.054 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.055 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.055 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.055 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.055 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.055 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.055 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.055 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.056 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.056 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.056 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.056 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.056 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.056 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.056 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.057 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.057 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.057 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.057 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.057 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.057 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.057 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.058 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.058 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.058 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.058 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.058 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.058 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.058 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.059 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.059 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.059 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.059 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.059 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.059 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.059 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.060 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.060 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.060 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.060 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.060 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.060 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.060 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.061 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.061 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.061 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.061 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.061 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.061 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.061 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.062 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.062 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.062 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.062 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.062 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.062 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.063 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.063 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.063 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.063 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.063 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.063 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.063 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.063 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.064 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.064 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.064 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.064 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.064 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.064 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.065 251881 WARNING oslo_config.cfg [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 01:43:19 np0005539508 nova_compute[251877]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 01:43:19 np0005539508 nova_compute[251877]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 29 01:43:19 np0005539508 nova_compute[251877]: and ``live_migration_inbound_addr`` respectively.
Nov 29 01:43:19 np0005539508 nova_compute[251877]: ).  Its value may be silently ignored in the future.#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.065 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.065 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.065 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.065 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.065 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.066 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.066 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.066 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.066 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.066 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.066 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.067 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.067 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.067 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.067 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.067 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.067 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.067 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.068 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.rbd_secret_uuid        = 336ec58c-893b-528f-a0c1-6ed1196bc047 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.068 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.068 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.068 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.068 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.068 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.068 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.069 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.069 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.069 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.069 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.069 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.069 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.070 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.070 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.070 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.070 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.070 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.070 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.070 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.071 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.071 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.071 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.071 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.071 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.071 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.071 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.072 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.072 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.072 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.072 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.072 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.072 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.072 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.073 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.073 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.073 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.073 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.073 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.073 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.073 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.074 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.074 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.074 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.074 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.074 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.074 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.074 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.075 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.075 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.075 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.075 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.075 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.075 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.075 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.076 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.076 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.076 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.076 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.076 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.076 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.076 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.077 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.077 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.077 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.077 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.077 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.077 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.078 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.078 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.078 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.078 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.078 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.078 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.078 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.079 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.079 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.079 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.079 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.079 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.079 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.079 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.079 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.080 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.080 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.080 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.080 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.080 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.080 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.080 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.081 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.081 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.081 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.081 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.081 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.081 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.081 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.082 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.082 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.082 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.082 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.082 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.082 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.082 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.083 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.083 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.083 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.083 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.083 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.083 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.083 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.084 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.084 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.084 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.084 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.084 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.084 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.085 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.085 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.085 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.085 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.085 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.085 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.086 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.086 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.086 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.086 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.086 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.086 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.087 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.087 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.087 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.087 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.087 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.087 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.088 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.088 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.088 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.088 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.088 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.088 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.088 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.089 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.089 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.089 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.089 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.089 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.089 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.089 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.090 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.090 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.090 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.090 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.090 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.090 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.091 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.091 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.091 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.091 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.091 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.091 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.092 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.092 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.092 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.092 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.092 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.092 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.093 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.093 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.093 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.093 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.093 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.093 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.093 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.093 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.094 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.094 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.094 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.094 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.094 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.094 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.095 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.095 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.095 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.095 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.095 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.095 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.095 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.096 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.096 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.096 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.096 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.096 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.096 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.096 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.097 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.097 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.097 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.097 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.097 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.097 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.098 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.098 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.098 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.098 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.098 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.098 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.099 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.099 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.099 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.099 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.099 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.099 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.099 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.100 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.100 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.100 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.100 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.100 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.100 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.100 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.100 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.101 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.101 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.101 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.101 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.101 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.101 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.101 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.102 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.102 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.102 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.102 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.102 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.103 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.103 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.103 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.103 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.103 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.104 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.104 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.104 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.104 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.104 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.105 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.105 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.105 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.105 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.105 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.106 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.106 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.106 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.106 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.106 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.106 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.107 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.107 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.107 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.107 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.107 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.108 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.108 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.108 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.108 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.109 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.109 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.109 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.109 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.109 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.110 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.110 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.110 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.110 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.110 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.111 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.111 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.111 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.111 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.111 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.112 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.112 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.112 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.112 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.112 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.113 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.113 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.113 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.113 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.113 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.114 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.114 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.114 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.114 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.114 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.114 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.115 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.115 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.115 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.115 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.115 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.116 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.116 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.116 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.116 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.116 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.117 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.117 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.117 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.117 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.117 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.117 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.118 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.118 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.118 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.118 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.118 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.119 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.119 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.119 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.119 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.119 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.120 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.120 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.120 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.120 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.120 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.121 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.121 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.121 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.121 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.121 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.121 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.122 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.122 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.122 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.122 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.122 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.123 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.123 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.123 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.123 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.123 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.124 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.124 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.124 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.124 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.124 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.125 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.125 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.125 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.125 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.125 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.125 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.126 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.126 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.126 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.126 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.126 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.127 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.127 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.127 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.127 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.127 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.127 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.128 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.128 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.128 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.128 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.128 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.129 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.129 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.129 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.129 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.129 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.130 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.130 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.130 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.130 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.130 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.130 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.131 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.131 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.131 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.131 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.132 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.132 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.132 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.132 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.132 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.132 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.133 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.133 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.133 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.133 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.133 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.133 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.133 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.134 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.134 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.134 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.134 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.134 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.134 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.134 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.134 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.135 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.136 251881 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 29 01:43:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:43:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:19.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.930 251881 INFO nova.virt.node [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Determined node identity 36ed0248-8d04-4532-95bb-daab89f12202 from /var/lib/nova/compute_id#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.931 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.931 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.931 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.932 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.944 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f4540490f10> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.947 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f4540490f10> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.951 251881 INFO nova.virt.libvirt.driver [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.960 251881 INFO nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 
Nov 29 01:43:19 np0005539508 nova_compute[251877]:  <host>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    <uuid>c87c7517-e569-4e42-8023-b11f25bc4e0c</uuid>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    <cpu>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <arch>x86_64</arch>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model>EPYC-Rome-v4</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <vendor>AMD</vendor>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <microcode version='16777317'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <signature family='23' model='49' stepping='0'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature name='x2apic'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature name='tsc-deadline'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature name='osxsave'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature name='hypervisor'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature name='tsc_adjust'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature name='spec-ctrl'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature name='stibp'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature name='arch-capabilities'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature name='ssbd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature name='cmp_legacy'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature name='topoext'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature name='virt-ssbd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature name='lbrv'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature name='tsc-scale'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature name='vmcb-clean'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature name='pause-filter'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature name='pfthreshold'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature name='svme-addr-chk'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature name='rdctl-no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature name='skip-l1dfl-vmentry'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature name='mds-no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature name='pschange-mc-no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <pages unit='KiB' size='4'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <pages unit='KiB' size='2048'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <pages unit='KiB' size='1048576'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    </cpu>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    <power_management>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <suspend_mem/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    </power_management>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    <iommu support='no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    <migration_features>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <live/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <uri_transports>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <uri_transport>tcp</uri_transport>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <uri_transport>rdma</uri_transport>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </uri_transports>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    </migration_features>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    <topology>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <cells num='1'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <cell id='0'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:          <memory unit='KiB'>7864324</memory>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:          <pages unit='KiB' size='4'>1966081</pages>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:          <pages unit='KiB' size='2048'>0</pages>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:          <distances>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:            <sibling id='0' value='10'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:          </distances>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:          <cpus num='8'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:          </cpus>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        </cell>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </cells>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    </topology>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    <cache>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    </cache>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    <secmodel>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model>selinux</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <doi>0</doi>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    </secmodel>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    <secmodel>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model>dac</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <doi>0</doi>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    </secmodel>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:  </host>
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 
Nov 29 01:43:19 np0005539508 nova_compute[251877]:  <guest>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    <os_type>hvm</os_type>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    <arch name='i686'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <wordsize>32</wordsize>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <domain type='qemu'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <domain type='kvm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    </arch>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    <features>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <pae/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <nonpae/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <acpi default='on' toggle='yes'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <apic default='on' toggle='no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <cpuselection/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <deviceboot/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <disksnapshot default='on' toggle='no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <externalSnapshot/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    </features>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:  </guest>
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 
Nov 29 01:43:19 np0005539508 nova_compute[251877]:  <guest>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    <os_type>hvm</os_type>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    <arch name='x86_64'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <wordsize>64</wordsize>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <domain type='qemu'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <domain type='kvm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    </arch>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    <features>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <acpi default='on' toggle='yes'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <apic default='on' toggle='no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <cpuselection/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <deviceboot/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <disksnapshot default='on' toggle='no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <externalSnapshot/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    </features>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:  </guest>
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 
Nov 29 01:43:19 np0005539508 nova_compute[251877]: </capabilities>
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.967 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 01:43:19 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.971 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 01:43:19 np0005539508 nova_compute[251877]: <domainCapabilities>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:  <domain>kvm</domain>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:  <arch>i686</arch>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:  <vcpu max='4096'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:  <iothreads supported='yes'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:  <os supported='yes'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    <enum name='firmware'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    <loader supported='yes'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <enum name='type'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <value>rom</value>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <value>pflash</value>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <enum name='readonly'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <value>yes</value>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <value>no</value>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <enum name='secure'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <value>no</value>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    </loader>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:  </os>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:  <cpu>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    <mode name='host-passthrough' supported='yes'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <enum name='hostPassthroughMigratable'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <value>on</value>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <value>off</value>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    </mode>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    <mode name='maximum' supported='yes'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <enum name='maximumMigratable'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <value>on</value>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <value>off</value>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    </mode>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    <mode name='host-model' supported='yes'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <vendor>AMD</vendor>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature policy='require' name='x2apic'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature policy='require' name='hypervisor'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature policy='require' name='stibp'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature policy='require' name='ssbd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature policy='require' name='overflow-recov'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature policy='require' name='succor'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature policy='require' name='ibrs'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature policy='require' name='lbrv'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature policy='require' name='tsc-scale'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature policy='require' name='flushbyasid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature policy='require' name='pause-filter'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature policy='require' name='pfthreshold'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <feature policy='disable' name='xsaves'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    </mode>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:    <mode name='custom' supported='yes'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Broadwell'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-IBRS'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-noTSX'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-v1'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-v2'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-v3'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-v4'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Cooperlake'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Cooperlake-v1'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Cooperlake-v2'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Denverton'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='mpx'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Denverton-v1'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='mpx'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Denverton-v2'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Denverton-v3'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Dhyana-v2'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Genoa'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='amd-psfd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='auto-ibrs'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='no-nested-data-bp'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='null-sel-clr-base'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='stibp-always-on'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='amd-psfd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='auto-ibrs'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='no-nested-data-bp'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='null-sel-clr-base'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='stibp-always-on'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Milan'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Milan-v1'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Milan-v2'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='amd-psfd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='no-nested-data-bp'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='null-sel-clr-base'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='stibp-always-on'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Rome'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Rome-v1'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Rome-v2'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Rome-v3'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='EPYC-v3'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='EPYC-v4'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='GraniteRapids'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='amx-fp16'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='mcdt-no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pbrsb-no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='prefetchiti'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='GraniteRapids-v1'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='amx-fp16'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='mcdt-no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pbrsb-no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='prefetchiti'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='GraniteRapids-v2'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='amx-fp16'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx10'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx10-128'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx10-256'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx10-512'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='mcdt-no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pbrsb-no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='prefetchiti'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Haswell'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Haswell-IBRS'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Haswell-noTSX'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Haswell-v1'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Haswell-v2'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Haswell-v3'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Haswell-v4'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v1'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v2'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v3'>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:19 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v5'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v6'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v7'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='IvyBridge'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='IvyBridge-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='IvyBridge-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='IvyBridge-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='KnightsMill'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-4fmaps'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-4vnniw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512er'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512pf'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='KnightsMill-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-4fmaps'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-4vnniw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512er'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512pf'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Opteron_G4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fma4'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xop'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Opteron_G4-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fma4'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xop'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Opteron_G5'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fma4'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tbm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xop'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Opteron_G5-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fma4'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tbm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xop'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='SapphireRapids'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='SapphireRapids-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='SapphireRapids-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='SapphireRapids-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='SierraForest'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-ne-convert'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cmpccxadd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mcdt-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pbrsb-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='SierraForest-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-ne-convert'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cmpccxadd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mcdt-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pbrsb-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-v5'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Snowridge'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='core-capability'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mpx'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='split-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Snowridge-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='core-capability'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mpx'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='split-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Snowridge-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='core-capability'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='split-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Snowridge-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='core-capability'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='split-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Snowridge-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='athlon'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnow'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnowext'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='athlon-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnow'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnowext'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='core2duo'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='core2duo-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='coreduo'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='coreduo-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='n270'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='n270-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='phenom'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnow'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnowext'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='phenom-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnow'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnowext'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </mode>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  </cpu>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <memoryBacking supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <enum name='sourceType'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <value>file</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <value>anonymous</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <value>memfd</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  </memoryBacking>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <devices>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <disk supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='diskDevice'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>disk</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>cdrom</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>floppy</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>lun</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='bus'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>fdc</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>scsi</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>usb</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>sata</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='model'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio-transitional</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio-non-transitional</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </disk>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <graphics supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='type'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>vnc</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>egl-headless</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>dbus</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </graphics>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <video supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='modelType'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>vga</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>cirrus</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>none</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>bochs</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>ramfb</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </video>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <hostdev supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='mode'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>subsystem</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='startupPolicy'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>default</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>mandatory</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>requisite</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>optional</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='subsysType'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>usb</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>pci</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>scsi</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='capsType'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='pciBackend'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </hostdev>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <rng supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='model'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio-transitional</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio-non-transitional</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='backendModel'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>random</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>egd</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>builtin</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </rng>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <filesystem supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='driverType'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>path</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>handle</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtiofs</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </filesystem>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <tpm supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='model'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>tpm-tis</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>tpm-crb</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='backendModel'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>emulator</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>external</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='backendVersion'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>2.0</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </tpm>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <redirdev supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='bus'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>usb</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </redirdev>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <channel supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='type'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>pty</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>unix</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </channel>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <crypto supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='model'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='type'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>qemu</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='backendModel'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>builtin</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </crypto>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <interface supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='backendType'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>default</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>passt</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </interface>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <panic supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='model'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>isa</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>hyperv</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </panic>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <console supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='type'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>null</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>vc</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>pty</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>dev</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>file</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>pipe</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>stdio</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>udp</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>tcp</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>unix</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>qemu-vdagent</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>dbus</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </console>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  </devices>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <features>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <gic supported='no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <vmcoreinfo supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <genid supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <backingStoreInput supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <backup supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <async-teardown supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <ps2 supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <sev supported='no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <sgx supported='no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <hyperv supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='features'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>relaxed</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>vapic</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>spinlocks</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>vpindex</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>runtime</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>synic</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>stimer</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>reset</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>vendor_id</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>frequencies</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>reenlightenment</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>tlbflush</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>ipi</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>avic</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>emsr_bitmap</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>xmm_input</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <defaults>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <spinlocks>4095</spinlocks>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <stimer_direct>on</stimer_direct>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </defaults>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </hyperv>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <launchSecurity supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='sectype'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>tdx</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </launchSecurity>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  </features>
Nov 29 01:43:20 np0005539508 nova_compute[251877]: </domainCapabilities>
Nov 29 01:43:20 np0005539508 nova_compute[251877]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 01:43:20 np0005539508 nova_compute[251877]: 2025-11-29 06:43:19.977 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 01:43:20 np0005539508 nova_compute[251877]: <domainCapabilities>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <domain>kvm</domain>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <arch>i686</arch>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <vcpu max='240'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <iothreads supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <os supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <enum name='firmware'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <loader supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='type'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>rom</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>pflash</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='readonly'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>yes</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>no</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='secure'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>no</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </loader>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  </os>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <cpu>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <mode name='host-passthrough' supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='hostPassthroughMigratable'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>on</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>off</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </mode>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <mode name='maximum' supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='maximumMigratable'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>on</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>off</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </mode>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <mode name='host-model' supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <vendor>AMD</vendor>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='x2apic'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='hypervisor'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='stibp'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='ssbd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='overflow-recov'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='succor'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='ibrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='lbrv'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='tsc-scale'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='flushbyasid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='pause-filter'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='pfthreshold'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='disable' name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </mode>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <mode name='custom' supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Broadwell'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-noTSX'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cooperlake'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cooperlake-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cooperlake-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Denverton'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mpx'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Denverton-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mpx'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Denverton-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Denverton-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Dhyana-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Genoa'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amd-psfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='auto-ibrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='no-nested-data-bp'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='null-sel-clr-base'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='stibp-always-on'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amd-psfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='auto-ibrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='no-nested-data-bp'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='null-sel-clr-base'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='stibp-always-on'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Milan'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Milan-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Milan-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amd-psfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='no-nested-data-bp'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='null-sel-clr-base'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='stibp-always-on'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Rome'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Rome-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Rome-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Rome-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='GraniteRapids'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mcdt-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pbrsb-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='prefetchiti'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='GraniteRapids-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mcdt-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pbrsb-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='prefetchiti'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='GraniteRapids-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx10'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx10-128'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx10-256'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx10-512'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mcdt-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pbrsb-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='prefetchiti'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Haswell'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Haswell-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Haswell-noTSX'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Haswell-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Haswell-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Haswell-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Haswell-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v5'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v6'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v7'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='IvyBridge'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='IvyBridge-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='IvyBridge-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='IvyBridge-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='KnightsMill'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-4fmaps'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-4vnniw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512er'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512pf'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='KnightsMill-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-4fmaps'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-4vnniw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512er'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512pf'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Opteron_G4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fma4'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xop'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Opteron_G4-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fma4'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xop'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Opteron_G5'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fma4'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tbm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xop'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Opteron_G5-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fma4'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tbm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xop'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='SapphireRapids'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='SapphireRapids-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='SapphireRapids-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='SapphireRapids-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='SierraForest'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-ne-convert'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cmpccxadd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mcdt-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pbrsb-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='SierraForest-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-ne-convert'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cmpccxadd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mcdt-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pbrsb-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-v5'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Snowridge'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='core-capability'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mpx'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='split-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Snowridge-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='core-capability'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mpx'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='split-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Snowridge-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='core-capability'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='split-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Snowridge-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='core-capability'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='split-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Snowridge-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='athlon'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnow'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnowext'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='athlon-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnow'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnowext'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='core2duo'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='core2duo-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='coreduo'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='coreduo-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='n270'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='n270-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='phenom'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnow'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnowext'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='phenom-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnow'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnowext'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </mode>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  </cpu>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <memoryBacking supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <enum name='sourceType'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <value>file</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <value>anonymous</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <value>memfd</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  </memoryBacking>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <devices>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <disk supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='diskDevice'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>disk</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>cdrom</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>floppy</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>lun</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='bus'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>ide</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>fdc</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>scsi</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>usb</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>sata</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='model'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio-transitional</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio-non-transitional</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </disk>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <graphics supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='type'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>vnc</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>egl-headless</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>dbus</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </graphics>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <video supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='modelType'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>vga</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>cirrus</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>none</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>bochs</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>ramfb</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </video>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <hostdev supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='mode'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>subsystem</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='startupPolicy'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>default</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>mandatory</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>requisite</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>optional</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='subsysType'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>usb</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>pci</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>scsi</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='capsType'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='pciBackend'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </hostdev>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <rng supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='model'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio-transitional</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio-non-transitional</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='backendModel'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>random</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>egd</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>builtin</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </rng>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <filesystem supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='driverType'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>path</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>handle</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtiofs</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </filesystem>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <tpm supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='model'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>tpm-tis</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>tpm-crb</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='backendModel'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>emulator</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>external</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='backendVersion'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>2.0</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </tpm>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <redirdev supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='bus'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>usb</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </redirdev>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <channel supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='type'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>pty</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>unix</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </channel>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <crypto supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='model'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='type'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>qemu</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='backendModel'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>builtin</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </crypto>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <interface supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='backendType'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>default</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>passt</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </interface>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <panic supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='model'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>isa</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>hyperv</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </panic>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <console supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='type'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>null</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>vc</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>pty</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>dev</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>file</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>pipe</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>stdio</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>udp</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>tcp</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>unix</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>qemu-vdagent</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>dbus</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </console>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  </devices>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <features>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <gic supported='no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <vmcoreinfo supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <genid supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <backingStoreInput supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <backup supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <async-teardown supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <ps2 supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <sev supported='no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <sgx supported='no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <hyperv supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='features'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>relaxed</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>vapic</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>spinlocks</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>vpindex</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>runtime</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>synic</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>stimer</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>reset</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>vendor_id</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>frequencies</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>reenlightenment</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>tlbflush</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>ipi</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>avic</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>emsr_bitmap</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>xmm_input</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <defaults>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <spinlocks>4095</spinlocks>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <stimer_direct>on</stimer_direct>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </defaults>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </hyperv>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <launchSecurity supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='sectype'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>tdx</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </launchSecurity>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  </features>
Nov 29 01:43:20 np0005539508 nova_compute[251877]: </domainCapabilities>
Nov 29 01:43:20 np0005539508 nova_compute[251877]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 29 01:43:20 np0005539508 nova_compute[251877]: 2025-11-29 06:43:20.005 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 29 01:43:20 np0005539508 nova_compute[251877]: 2025-11-29 06:43:20.010 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 01:43:20 np0005539508 nova_compute[251877]: <domainCapabilities>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <domain>kvm</domain>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <arch>x86_64</arch>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <vcpu max='4096'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <iothreads supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <os supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <enum name='firmware'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <value>efi</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <loader supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='type'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>rom</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>pflash</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='readonly'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>yes</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>no</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='secure'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>yes</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>no</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </loader>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  </os>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <cpu>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <mode name='host-passthrough' supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='hostPassthroughMigratable'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>on</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>off</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </mode>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <mode name='maximum' supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='maximumMigratable'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>on</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>off</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </mode>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <mode name='host-model' supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <vendor>AMD</vendor>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='x2apic'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='hypervisor'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='stibp'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='ssbd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='overflow-recov'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='succor'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='ibrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='lbrv'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='tsc-scale'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='flushbyasid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='pause-filter'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='pfthreshold'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='disable' name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </mode>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <mode name='custom' supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Broadwell'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-noTSX'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cooperlake'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cooperlake-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cooperlake-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Denverton'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mpx'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Denverton-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mpx'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Denverton-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Denverton-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Dhyana-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Genoa'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amd-psfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='auto-ibrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='no-nested-data-bp'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='null-sel-clr-base'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='stibp-always-on'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amd-psfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='auto-ibrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='no-nested-data-bp'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='null-sel-clr-base'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='stibp-always-on'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Milan'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Milan-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Milan-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amd-psfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='no-nested-data-bp'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='null-sel-clr-base'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='stibp-always-on'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Rome'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Rome-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Rome-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Rome-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='GraniteRapids'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mcdt-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pbrsb-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='prefetchiti'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='GraniteRapids-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mcdt-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pbrsb-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='prefetchiti'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='GraniteRapids-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx10'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx10-128'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx10-256'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx10-512'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mcdt-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pbrsb-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='prefetchiti'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Haswell'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Haswell-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Haswell-noTSX'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Haswell-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Haswell-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Haswell-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Haswell-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v5'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v6'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v7'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='IvyBridge'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='IvyBridge-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='IvyBridge-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='IvyBridge-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='KnightsMill'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-4fmaps'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-4vnniw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512er'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512pf'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='KnightsMill-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-4fmaps'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-4vnniw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512er'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512pf'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Opteron_G4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fma4'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xop'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Opteron_G4-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fma4'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xop'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Opteron_G5'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fma4'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tbm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xop'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Opteron_G5-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fma4'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tbm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xop'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='SapphireRapids'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='SapphireRapids-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='SapphireRapids-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='SapphireRapids-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='SierraForest'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-ne-convert'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cmpccxadd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mcdt-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pbrsb-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='SierraForest-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-ne-convert'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cmpccxadd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mcdt-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pbrsb-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-v5'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Snowridge'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='core-capability'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mpx'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='split-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Snowridge-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='core-capability'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mpx'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='split-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Snowridge-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='core-capability'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='split-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Snowridge-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='core-capability'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='split-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Snowridge-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='athlon'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnow'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnowext'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='athlon-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnow'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnowext'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='core2duo'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='core2duo-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='coreduo'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='coreduo-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='n270'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='n270-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='phenom'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnow'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnowext'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='phenom-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnow'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnowext'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </mode>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  </cpu>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <memoryBacking supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <enum name='sourceType'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <value>file</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <value>anonymous</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <value>memfd</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  </memoryBacking>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <devices>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <disk supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='diskDevice'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>disk</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>cdrom</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>floppy</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>lun</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='bus'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>fdc</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>scsi</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>usb</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>sata</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='model'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio-transitional</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio-non-transitional</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </disk>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <graphics supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='type'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>vnc</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>egl-headless</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>dbus</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </graphics>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <video supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='modelType'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>vga</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>cirrus</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>none</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>bochs</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>ramfb</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </video>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <hostdev supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='mode'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>subsystem</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='startupPolicy'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>default</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>mandatory</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>requisite</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>optional</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='subsysType'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>usb</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>pci</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>scsi</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='capsType'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='pciBackend'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </hostdev>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <rng supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='model'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio-transitional</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio-non-transitional</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='backendModel'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>random</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>egd</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>builtin</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </rng>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <filesystem supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='driverType'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>path</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>handle</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtiofs</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </filesystem>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <tpm supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='model'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>tpm-tis</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>tpm-crb</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='backendModel'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>emulator</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>external</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='backendVersion'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>2.0</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </tpm>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <redirdev supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='bus'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>usb</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </redirdev>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <channel supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='type'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>pty</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>unix</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </channel>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <crypto supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='model'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='type'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>qemu</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='backendModel'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>builtin</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </crypto>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <interface supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='backendType'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>default</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>passt</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </interface>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <panic supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='model'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>isa</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>hyperv</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </panic>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <console supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='type'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>null</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>vc</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>pty</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>dev</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>file</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>pipe</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>stdio</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>udp</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>tcp</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>unix</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>qemu-vdagent</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>dbus</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </console>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  </devices>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <features>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <gic supported='no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <vmcoreinfo supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <genid supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <backingStoreInput supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <backup supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <async-teardown supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <ps2 supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <sev supported='no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <sgx supported='no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <hyperv supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='features'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>relaxed</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>vapic</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>spinlocks</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>vpindex</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>runtime</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>synic</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>stimer</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>reset</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>vendor_id</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>frequencies</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>reenlightenment</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>tlbflush</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>ipi</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>avic</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>emsr_bitmap</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>xmm_input</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <defaults>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <spinlocks>4095</spinlocks>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <stimer_direct>on</stimer_direct>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </defaults>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </hyperv>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <launchSecurity supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='sectype'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>tdx</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </launchSecurity>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  </features>
Nov 29 01:43:20 np0005539508 nova_compute[251877]: </domainCapabilities>
Nov 29 01:43:20 np0005539508 nova_compute[251877]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 01:43:20 np0005539508 nova_compute[251877]: 2025-11-29 06:43:20.076 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 01:43:20 np0005539508 nova_compute[251877]: <domainCapabilities>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <domain>kvm</domain>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <arch>x86_64</arch>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <vcpu max='240'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <iothreads supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <os supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <enum name='firmware'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <loader supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='type'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>rom</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>pflash</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='readonly'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>yes</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>no</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='secure'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>no</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </loader>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  </os>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <cpu>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <mode name='host-passthrough' supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='hostPassthroughMigratable'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>on</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>off</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </mode>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <mode name='maximum' supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='maximumMigratable'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>on</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>off</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </mode>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <mode name='host-model' supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <vendor>AMD</vendor>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='x2apic'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='hypervisor'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='stibp'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='ssbd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='overflow-recov'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='succor'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='ibrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='lbrv'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='tsc-scale'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='flushbyasid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='pause-filter'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='pfthreshold'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <feature policy='disable' name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </mode>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <mode name='custom' supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Broadwell'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-noTSX'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Broadwell-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cooperlake'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cooperlake-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Cooperlake-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Denverton'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mpx'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Denverton-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mpx'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Denverton-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Denverton-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Dhyana-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Genoa'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amd-psfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='auto-ibrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='no-nested-data-bp'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='null-sel-clr-base'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='stibp-always-on'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amd-psfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='auto-ibrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='no-nested-data-bp'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='null-sel-clr-base'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='stibp-always-on'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Milan'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Milan-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Milan-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amd-psfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='no-nested-data-bp'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='null-sel-clr-base'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='stibp-always-on'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Rome'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Rome-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Rome-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-Rome-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='EPYC-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='GraniteRapids'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mcdt-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pbrsb-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='prefetchiti'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='GraniteRapids-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mcdt-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pbrsb-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='prefetchiti'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='GraniteRapids-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx10'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx10-128'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx10-256'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx10-512'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mcdt-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pbrsb-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='prefetchiti'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Haswell'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Haswell-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Haswell-noTSX'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Haswell-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Haswell-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Haswell-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Haswell-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v5'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v6'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Icelake-Server-v7'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='IvyBridge'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='IvyBridge-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='IvyBridge-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='IvyBridge-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='KnightsMill'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-4fmaps'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-4vnniw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512er'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512pf'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='KnightsMill-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-4fmaps'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-4vnniw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512er'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512pf'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Opteron_G4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fma4'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xop'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Opteron_G4-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fma4'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xop'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Opteron_G5'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fma4'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tbm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xop'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Opteron_G5-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fma4'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tbm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xop'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='SapphireRapids'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='SapphireRapids-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='SapphireRapids-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='SapphireRapids-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='amx-tile'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-bf16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-fp16'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512-vpopcntdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bitalg'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vbmi2'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrc'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fzrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='la57'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='taa-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='tsx-ldtrk'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xfd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='SierraForest'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-ne-convert'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cmpccxadd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mcdt-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pbrsb-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='SierraForest-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-ifma'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-ne-convert'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx-vnni-int8'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='bus-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cmpccxadd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fbsdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='fsrs'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ibrs-all'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mcdt-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pbrsb-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='psdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='sbdr-ssdp-no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='serialize'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vaes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='vpclmulqdq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Client-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='hle'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='rtm'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Skylake-Server-v5'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512bw'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512cd'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512dq'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512f'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='avx512vl'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='invpcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pcid'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='pku'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Snowridge'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='core-capability'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mpx'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='split-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Snowridge-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='core-capability'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='mpx'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='split-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Snowridge-v2'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='core-capability'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='split-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Snowridge-v3'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='core-capability'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='split-lock-detect'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='Snowridge-v4'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='cldemote'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='erms'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='gfni'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdir64b'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='movdiri'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='xsaves'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='athlon'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnow'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnowext'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='athlon-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnow'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnowext'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='core2duo'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='core2duo-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='coreduo'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='coreduo-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='n270'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='n270-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='ss'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='phenom'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnow'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnowext'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <blockers model='phenom-v1'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnow'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <feature name='3dnowext'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </blockers>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </mode>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  </cpu>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <memoryBacking supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <enum name='sourceType'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <value>file</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <value>anonymous</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <value>memfd</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  </memoryBacking>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <devices>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <disk supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='diskDevice'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>disk</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>cdrom</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>floppy</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>lun</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='bus'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>ide</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>fdc</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>scsi</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>usb</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>sata</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='model'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio-transitional</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio-non-transitional</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </disk>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <graphics supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='type'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>vnc</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>egl-headless</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>dbus</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </graphics>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <video supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='modelType'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>vga</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>cirrus</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>none</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>bochs</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>ramfb</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </video>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <hostdev supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='mode'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>subsystem</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='startupPolicy'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>default</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>mandatory</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>requisite</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>optional</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='subsysType'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>usb</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>pci</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>scsi</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='capsType'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='pciBackend'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </hostdev>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <rng supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='model'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio-transitional</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtio-non-transitional</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='backendModel'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>random</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>egd</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>builtin</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </rng>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <filesystem supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='driverType'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>path</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>handle</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>virtiofs</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </filesystem>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <tpm supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='model'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>tpm-tis</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>tpm-crb</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='backendModel'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>emulator</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>external</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='backendVersion'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>2.0</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </tpm>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <redirdev supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='bus'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>usb</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </redirdev>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <channel supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='type'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>pty</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>unix</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </channel>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <crypto supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='model'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='type'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>qemu</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='backendModel'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>builtin</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </crypto>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <interface supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='backendType'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>default</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>passt</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </interface>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <panic supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='model'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>isa</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>hyperv</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </panic>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <console supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='type'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>null</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>vc</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>pty</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>dev</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>file</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>pipe</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>stdio</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>udp</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>tcp</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>unix</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>qemu-vdagent</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>dbus</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </console>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  </devices>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <features>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <gic supported='no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <vmcoreinfo supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <genid supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <backingStoreInput supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <backup supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <async-teardown supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <ps2 supported='yes'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <sev supported='no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <sgx supported='no'/>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <hyperv supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='features'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>relaxed</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>vapic</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>spinlocks</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>vpindex</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>runtime</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>synic</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>stimer</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>reset</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>vendor_id</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>frequencies</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>reenlightenment</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>tlbflush</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>ipi</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>avic</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>emsr_bitmap</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>xmm_input</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <defaults>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <spinlocks>4095</spinlocks>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <stimer_direct>on</stimer_direct>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </defaults>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </hyperv>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    <launchSecurity supported='yes'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      <enum name='sectype'>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:        <value>tdx</value>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:      </enum>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:    </launchSecurity>
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  </features>
Nov 29 01:43:20 np0005539508 nova_compute[251877]: </domainCapabilities>
Nov 29 01:43:20 np0005539508 nova_compute[251877]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 29 01:43:20 np0005539508 nova_compute[251877]: 2025-11-29 06:43:20.143 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 29 01:43:20 np0005539508 nova_compute[251877]: 2025-11-29 06:43:20.144 251881 INFO nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Secure Boot support detected#033[00m
Nov 29 01:43:20 np0005539508 nova_compute[251877]: 2025-11-29 06:43:20.146 251881 INFO nova.virt.libvirt.driver [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 29 01:43:20 np0005539508 nova_compute[251877]: 2025-11-29 06:43:20.156 251881 DEBUG nova.virt.libvirt.driver [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] cpu compare xml: <cpu match="exact">
Nov 29 01:43:20 np0005539508 nova_compute[251877]:  <model>Nehalem</model>
Nov 29 01:43:20 np0005539508 nova_compute[251877]: </cpu>
Nov 29 01:43:20 np0005539508 nova_compute[251877]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019#033[00m
Nov 29 01:43:20 np0005539508 nova_compute[251877]: 2025-11-29 06:43:20.159 251881 DEBUG nova.virt.libvirt.driver [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 29 01:43:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:20.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:20 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:43:20 np0005539508 nova_compute[251877]: 2025-11-29 06:43:20.441 251881 DEBUG nova.virt.libvirt.volume.mount [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 29 01:43:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:43:20 np0005539508 nova_compute[251877]: 2025-11-29 06:43:20.765 251881 INFO nova.virt.node [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Determined node identity 36ed0248-8d04-4532-95bb-daab89f12202 from /var/lib/nova/compute_id#033[00m
Nov 29 01:43:21 np0005539508 nova_compute[251877]: 2025-11-29 06:43:21.394 251881 WARNING nova.compute.manager [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Compute nodes ['36ed0248-8d04-4532-95bb-daab89f12202'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Nov 29 01:43:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:21.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:22.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:22 np0005539508 nova_compute[251877]: 2025-11-29 06:43:22.537 251881 INFO nova.compute.manager [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 29 01:43:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:43:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:23.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:24 np0005539508 nova_compute[251877]: 2025-11-29 06:43:24.065 251881 WARNING nova.compute.manager [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 29 01:43:24 np0005539508 nova_compute[251877]: 2025-11-29 06:43:24.065 251881 DEBUG oslo_concurrency.lockutils [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:43:24 np0005539508 nova_compute[251877]: 2025-11-29 06:43:24.065 251881 DEBUG oslo_concurrency.lockutils [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:43:24 np0005539508 nova_compute[251877]: 2025-11-29 06:43:24.065 251881 DEBUG oslo_concurrency.lockutils [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:43:24 np0005539508 nova_compute[251877]: 2025-11-29 06:43:24.066 251881 DEBUG nova.compute.resource_tracker [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 01:43:24 np0005539508 nova_compute[251877]: 2025-11-29 06:43:24.066 251881 DEBUG oslo_concurrency.processutils [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 01:43:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:43:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:43:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:43:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:43:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:43:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:43:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.002000057s ======
Nov 29 01:43:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:24.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Nov 29 01:43:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 01:43:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/766159143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 01:43:24 np0005539508 nova_compute[251877]: 2025-11-29 06:43:24.518 251881 DEBUG oslo_concurrency.processutils [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 01:43:24 np0005539508 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 01:43:24 np0005539508 systemd[1]: Started libvirt nodedev daemon.
Nov 29 01:43:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:43:24 np0005539508 nova_compute[251877]: 2025-11-29 06:43:24.991 251881 WARNING nova.virt.libvirt.driver [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 01:43:24 np0005539508 nova_compute[251877]: 2025-11-29 06:43:24.993 251881 DEBUG nova.compute.resource_tracker [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5211MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 01:43:24 np0005539508 nova_compute[251877]: 2025-11-29 06:43:24.994 251881 DEBUG oslo_concurrency.lockutils [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:43:24 np0005539508 nova_compute[251877]: 2025-11-29 06:43:24.994 251881 DEBUG oslo_concurrency.lockutils [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:43:25 np0005539508 nova_compute[251877]: 2025-11-29 06:43:25.157 251881 WARNING nova.compute.resource_tracker [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] No compute node record for compute-0.ctlplane.example.com:36ed0248-8d04-4532-95bb-daab89f12202: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 36ed0248-8d04-4532-95bb-daab89f12202 could not be found.#033[00m
Nov 29 01:43:25 np0005539508 nova_compute[251877]: 2025-11-29 06:43:25.254 251881 INFO nova.compute.resource_tracker [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 36ed0248-8d04-4532-95bb-daab89f12202#033[00m
Nov 29 01:43:25 np0005539508 nova_compute[251877]: 2025-11-29 06:43:25.579 251881 DEBUG nova.compute.resource_tracker [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 01:43:25 np0005539508 nova_compute[251877]: 2025-11-29 06:43:25.580 251881 DEBUG nova.compute.resource_tracker [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 01:43:25 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:43:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:43:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:25.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:43:25 np0005539508 nova_compute[251877]: 2025-11-29 06:43:25.876 251881 INFO nova.scheduler.client.report [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] [req-06598c54-fb62-4044-8dcf-489128907ffe] Created resource provider record via placement API for resource provider with UUID 36ed0248-8d04-4532-95bb-daab89f12202 and name compute-0.ctlplane.example.com.#033[00m
Nov 29 01:43:25 np0005539508 nova_compute[251877]: 2025-11-29 06:43:25.950 251881 DEBUG oslo_concurrency.processutils [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 01:43:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:43:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:26.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:43:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 01:43:26 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2410258570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 01:43:26 np0005539508 nova_compute[251877]: 2025-11-29 06:43:26.575 251881 DEBUG oslo_concurrency.processutils [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 01:43:26 np0005539508 nova_compute[251877]: 2025-11-29 06:43:26.583 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 29 01:43:26 np0005539508 nova_compute[251877]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Nov 29 01:43:26 np0005539508 nova_compute[251877]: 2025-11-29 06:43:26.583 251881 INFO nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] kernel doesn't support AMD SEV#033[00m
Nov 29 01:43:26 np0005539508 nova_compute[251877]: 2025-11-29 06:43:26.584 251881 DEBUG nova.compute.provider_tree [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Updating inventory in ProviderTree for provider 36ed0248-8d04-4532-95bb-daab89f12202 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 01:43:26 np0005539508 nova_compute[251877]: 2025-11-29 06:43:26.585 251881 DEBUG nova.virt.libvirt.driver [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 01:43:26 np0005539508 nova_compute[251877]: 2025-11-29 06:43:26.588 251881 DEBUG nova.virt.libvirt.driver [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Libvirt baseline CPU <cpu>
Nov 29 01:43:26 np0005539508 nova_compute[251877]:  <arch>x86_64</arch>
Nov 29 01:43:26 np0005539508 nova_compute[251877]:  <model>Nehalem</model>
Nov 29 01:43:26 np0005539508 nova_compute[251877]:  <vendor>AMD</vendor>
Nov 29 01:43:26 np0005539508 nova_compute[251877]:  <topology sockets="8" cores="1" threads="1"/>
Nov 29 01:43:26 np0005539508 nova_compute[251877]: </cpu>
Nov 29 01:43:26 np0005539508 nova_compute[251877]: _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537#033[00m
Nov 29 01:43:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:43:26 np0005539508 nova_compute[251877]: 2025-11-29 06:43:26.766 251881 DEBUG nova.scheduler.client.report [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Updated inventory for provider 36ed0248-8d04-4532-95bb-daab89f12202 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 29 01:43:26 np0005539508 nova_compute[251877]: 2025-11-29 06:43:26.767 251881 DEBUG nova.compute.provider_tree [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Updating resource provider 36ed0248-8d04-4532-95bb-daab89f12202 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 29 01:43:26 np0005539508 nova_compute[251877]: 2025-11-29 06:43:26.767 251881 DEBUG nova.compute.provider_tree [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Updating inventory in ProviderTree for provider 36ed0248-8d04-4532-95bb-daab89f12202 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 01:43:27 np0005539508 nova_compute[251877]: 2025-11-29 06:43:27.041 251881 DEBUG nova.compute.provider_tree [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Updating resource provider 36ed0248-8d04-4532-95bb-daab89f12202 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 29 01:43:27 np0005539508 nova_compute[251877]: 2025-11-29 06:43:27.405 251881 DEBUG nova.compute.resource_tracker [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 01:43:27 np0005539508 nova_compute[251877]: 2025-11-29 06:43:27.406 251881 DEBUG oslo_concurrency.lockutils [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.412s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:43:27 np0005539508 nova_compute[251877]: 2025-11-29 06:43:27.406 251881 DEBUG nova.service [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Nov 29 01:43:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:27.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:28 np0005539508 nova_compute[251877]: 2025-11-29 06:43:28.046 251881 DEBUG nova.service [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Nov 29 01:43:28 np0005539508 nova_compute[251877]: 2025-11-29 06:43:28.047 251881 DEBUG nova.servicegroup.drivers.db [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Nov 29 01:43:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:28.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:43:29 np0005539508 podman[252309]: 2025-11-29 06:43:29.173956551 +0000 UTC m=+0.134310750 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 01:43:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:43:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:43:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:43:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:43:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:43:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:43:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:43:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:43:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:43:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:43:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:43:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:29.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:43:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:43:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:30.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:43:30 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:43:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:43:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:31.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:32.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:43:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:43:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:33.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:43:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:43:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:34.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:43:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:43:35 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:43:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:43:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:35.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:43:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:36.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:43:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:43:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:37.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:43:38 np0005539508 podman[252387]: 2025-11-29 06:43:38.123130462 +0000 UTC m=+0.076179190 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 01:43:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:43:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:38.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:43:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:43:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:43:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:39.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:43:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:40.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:40 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:43:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:43:41 np0005539508 podman[252408]: 2025-11-29 06:43:41.127366231 +0000 UTC m=+0.099638393 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 01:43:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:43:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:41.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:43:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:43:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:42.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:43:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:43:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:43:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:43.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:43:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:44.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:43:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:43:45 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:43:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:43:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:43:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:45.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:47:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:47:49 np0005539508 rsyslogd[1007]: imjournal: 2575 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 29 01:47:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:47:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:47:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:50.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:47:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:47:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:47:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:50.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:47:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:47:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:47:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:47:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:47:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:52.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:47:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:47:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:47:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:52.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:47:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:47:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:47:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:47:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:54.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:47:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:47:54
Nov 29 01:47:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:47:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:47:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'backups', '.mgr', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.meta', '.rgw.root']
Nov 29 01:47:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:47:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:47:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:47:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:47:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:47:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:47:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:47:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:47:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:47:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:54.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:47:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:47:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:47:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:47:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:56.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:47:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:47:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:47:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:56.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:47:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.074075) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398877074203, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1681, "num_deletes": 251, "total_data_size": 3039443, "memory_usage": 3089224, "flush_reason": "Manual Compaction"}
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398877151799, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 1737351, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18090, "largest_seqno": 19769, "table_properties": {"data_size": 1731823, "index_size": 2668, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14351, "raw_average_key_size": 20, "raw_value_size": 1719513, "raw_average_value_size": 2428, "num_data_blocks": 123, "num_entries": 708, "num_filter_entries": 708, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764398703, "oldest_key_time": 1764398703, "file_creation_time": 1764398877, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 77870 microseconds, and 9912 cpu microseconds.
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.151953) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 1737351 bytes OK
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.151986) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.154962) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.154995) EVENT_LOG_v1 {"time_micros": 1764398877154986, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.155021) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 3032417, prev total WAL file size 3032417, number of live WAL files 2.
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.156715) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373535' seq:0, type:0; will stop at (end)
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(1696KB)], [41(9238KB)]
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398877156799, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 11197876, "oldest_snapshot_seqno": -1}
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4809 keys, 8585958 bytes, temperature: kUnknown
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398877271951, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 8585958, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8553778, "index_size": 19078, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12037, "raw_key_size": 121166, "raw_average_key_size": 25, "raw_value_size": 8466674, "raw_average_value_size": 1760, "num_data_blocks": 783, "num_entries": 4809, "num_filter_entries": 4809, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 1764398877, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.272302) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 8585958 bytes
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.299866) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 97.1 rd, 74.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 9.0 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(11.4) write-amplify(4.9) OK, records in: 5243, records dropped: 434 output_compression: NoCompression
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.299942) EVENT_LOG_v1 {"time_micros": 1764398877299927, "job": 20, "event": "compaction_finished", "compaction_time_micros": 115286, "compaction_time_cpu_micros": 43319, "output_level": 6, "num_output_files": 1, "total_output_size": 8585958, "num_input_records": 5243, "num_output_records": 4809, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398877301066, "job": 20, "event": "table_file_deletion", "file_number": 43}
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398877304870, "job": 20, "event": "table_file_deletion", "file_number": 41}
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.156569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.305078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.305084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.305087) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.305090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:47:57 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.305092) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:47:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:47:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:47:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:58.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:47:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:47:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:47:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:58.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:47:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:48:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:00.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:48:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:00.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:02 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:48:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:02.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:02.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:04.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:04.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:05 np0005539508 podman[257544]: 2025-11-29 06:48:05.136819385 +0000 UTC m=+0.095521699 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 01:48:06 np0005539508 nova_compute[251877]: 2025-11-29 06:48:06.077 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 15.85 sec#033[00m
Nov 29 01:48:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:48:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:06.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:48:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:48:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:06.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:48:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:48:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:08.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:08.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:48:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:10.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:48:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:10.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:48:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:48:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:12.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:48:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:12.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:12 np0005539508 nova_compute[251877]: 2025-11-29 06:48:12.958 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:48:12 np0005539508 nova_compute[251877]: 2025-11-29 06:48:12.959 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 01:48:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:48:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:48:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:48:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:48:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:48:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:48:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:48:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:48:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:48:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:48:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:48:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:48:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:48:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:48:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:48:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:48:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:48:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:48:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:48:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:48:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:48:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:48:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:48:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:48:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:14.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:48:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:48:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:14.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:48:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:16 np0005539508 podman[257627]: 2025-11-29 06:48:16.121728388 +0000 UTC m=+0.080435430 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 29 01:48:16 np0005539508 podman[257628]: 2025-11-29 06:48:16.19007956 +0000 UTC m=+0.148211416 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 01:48:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:16.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:16.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:48:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:48:17.237 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:48:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:48:17.239 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:48:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:48:17.239 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:48:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:48:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:18.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:48:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:18.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:20.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:20 np0005539508 nova_compute[251877]: 2025-11-29 06:48:20.353 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 4.28 sec#033[00m
Nov 29 01:48:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:20.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:48:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:22.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:22.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:48:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:24.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:48:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:48:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:48:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:48:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:48:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:48:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:48:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:24.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:48:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:26.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:48:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:48:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:26.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:48:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:27 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:48:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:48:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:28.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:48:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:28.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:28 np0005539508 nova_compute[251877]: 2025-11-29 06:48:28.745 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 01:48:28 np0005539508 nova_compute[251877]: 2025-11-29 06:48:28.748 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:48:28 np0005539508 nova_compute[251877]: 2025-11-29 06:48:28.748 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 01:48:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:48:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:48:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:48:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:48:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:48:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:48:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:48:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:48:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:48:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:48:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:30.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:48:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:30.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:48:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:48:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:32.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:32.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:34.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:48:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:34.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:48:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:35 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 01:48:35 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.8 total, 600.0 interval#012Cumulative writes: 9194 writes, 35K keys, 9194 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 9194 writes, 2074 syncs, 4.43 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 682 writes, 1062 keys, 682 commit groups, 1.0 writes per commit group, ingest: 0.34 MB, 0.00 MB/s#012Interval WAL: 682 writes, 328 syncs, 2.08 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 01:48:36 np0005539508 podman[257737]: 2025-11-29 06:48:36.128515031 +0000 UTC m=+0.090289733 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:48:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:36.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:48:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:36.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:48:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:48:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:38.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:38.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:48:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:40.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:48:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:48:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:40.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:48:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:48:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:48:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:42.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:48:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:42.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:43 np0005539508 podman[257936]: 2025-11-29 06:48:43.674053947 +0000 UTC m=+0.093501783 container exec c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:48:43 np0005539508 podman[257936]: 2025-11-29 06:48:43.794639472 +0000 UTC m=+0.214087338 container exec_died c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 01:48:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:48:44 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:48:44 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:44.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:44 np0005539508 podman[258089]: 2025-11-29 06:48:44.567097319 +0000 UTC m=+0.079368929 container exec f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 01:48:44 np0005539508 podman[258089]: 2025-11-29 06:48:44.586298834 +0000 UTC m=+0.098570394 container exec_died f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 01:48:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:44.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 01:48:44 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:48:44 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:44 np0005539508 podman[258153]: 2025-11-29 06:48:44.90310995 +0000 UTC m=+0.074457543 container exec c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, distribution-scope=public, io.openshift.tags=Ceph keepalived, architecture=x86_64, release=1793, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.openshift.expose-services=, build-date=2023-02-22T09:23:20)
Nov 29 01:48:44 np0005539508 podman[258153]: 2025-11-29 06:48:44.916096442 +0000 UTC m=+0.087444045 container exec_died c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, architecture=x86_64, io.openshift.tags=Ceph keepalived)
Nov 29 01:48:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.074531) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398925074604, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 654, "num_deletes": 251, "total_data_size": 886958, "memory_usage": 898536, "flush_reason": "Manual Compaction"}
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398925089483, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 878455, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19770, "largest_seqno": 20423, "table_properties": {"data_size": 874937, "index_size": 1426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7933, "raw_average_key_size": 19, "raw_value_size": 867897, "raw_average_value_size": 2121, "num_data_blocks": 62, "num_entries": 409, "num_filter_entries": 409, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764398878, "oldest_key_time": 1764398878, "file_creation_time": 1764398925, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 15144 microseconds, and 7122 cpu microseconds.
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.089674) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 878455 bytes OK
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.089768) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.091508) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.091566) EVENT_LOG_v1 {"time_micros": 1764398925091556, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.091593) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 883557, prev total WAL file size 883557, number of live WAL files 2.
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.092951) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(857KB)], [44(8384KB)]
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398925093031, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 9464413, "oldest_snapshot_seqno": -1}
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4700 keys, 7370361 bytes, temperature: kUnknown
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398925159843, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7370361, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7339958, "index_size": 17557, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11781, "raw_key_size": 119472, "raw_average_key_size": 25, "raw_value_size": 7255731, "raw_average_value_size": 1543, "num_data_blocks": 714, "num_entries": 4700, "num_filter_entries": 4700, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 1764398925, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.160179) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7370361 bytes
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.161958) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.4 rd, 110.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 8.2 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(19.2) write-amplify(8.4) OK, records in: 5218, records dropped: 518 output_compression: NoCompression
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.161984) EVENT_LOG_v1 {"time_micros": 1764398925161972, "job": 22, "event": "compaction_finished", "compaction_time_micros": 66937, "compaction_time_cpu_micros": 31632, "output_level": 6, "num_output_files": 1, "total_output_size": 7370361, "num_input_records": 5218, "num_output_records": 4700, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398925163064, "job": 22, "event": "table_file_deletion", "file_number": 46}
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398925166350, "job": 22, "event": "table_file_deletion", "file_number": 44}
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.092836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.166397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.166404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.166407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.166410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.166413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:48:45 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:46 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:46 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:46 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:46 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:48:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:46.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:48:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:46.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:47 np0005539508 podman[258321]: 2025-11-29 06:48:47.152742507 +0000 UTC m=+0.108422869 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 29 01:48:47 np0005539508 podman[258322]: 2025-11-29 06:48:47.184285545 +0000 UTC m=+0.131461320 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 01:48:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:48:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:48:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:48.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:48:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:48.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 01:48:49 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:48:49 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:48:49 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:48:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:48:49 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:48:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:48:50 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:50 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev e83f8407-e815-446f-bb63-4f55fa7fa9a2 does not exist
Nov 29 01:48:50 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev f6d336ef-eff9-4688-9010-6a487937a273 does not exist
Nov 29 01:48:50 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 38e04522-70ed-4a3a-9b0b-df93e09605fa does not exist
Nov 29 01:48:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:48:50 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:48:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:48:50 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:48:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:48:50 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:48:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:50.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:50 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:50 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:50 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:48:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:48:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:50.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:48:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:50 np0005539508 nova_compute[251877]: 2025-11-29 06:48:50.976 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 20.62 sec#033[00m
Nov 29 01:48:51 np0005539508 podman[258505]: 2025-11-29 06:48:51.042442875 +0000 UTC m=+0.127681945 container create dcb57b1d498da49524739a1abd0b9fef120067a655e7de412a917aa08c5ff845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_brattain, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:48:51 np0005539508 podman[258505]: 2025-11-29 06:48:50.959774594 +0000 UTC m=+0.045013724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:48:51 np0005539508 nova_compute[251877]: 2025-11-29 06:48:51.092 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:48:51 np0005539508 systemd[1]: Started libpod-conmon-dcb57b1d498da49524739a1abd0b9fef120067a655e7de412a917aa08c5ff845.scope.
Nov 29 01:48:51 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:48:51 np0005539508 podman[258505]: 2025-11-29 06:48:51.341114876 +0000 UTC m=+0.426353966 container init dcb57b1d498da49524739a1abd0b9fef120067a655e7de412a917aa08c5ff845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_brattain, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:48:51 np0005539508 podman[258505]: 2025-11-29 06:48:51.355052244 +0000 UTC m=+0.440291304 container start dcb57b1d498da49524739a1abd0b9fef120067a655e7de412a917aa08c5ff845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:48:51 np0005539508 podman[258505]: 2025-11-29 06:48:51.363404207 +0000 UTC m=+0.448643277 container attach dcb57b1d498da49524739a1abd0b9fef120067a655e7de412a917aa08c5ff845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 01:48:51 np0005539508 sweet_brattain[258523]: 167 167
Nov 29 01:48:51 np0005539508 systemd[1]: libpod-dcb57b1d498da49524739a1abd0b9fef120067a655e7de412a917aa08c5ff845.scope: Deactivated successfully.
Nov 29 01:48:51 np0005539508 podman[258505]: 2025-11-29 06:48:51.365657949 +0000 UTC m=+0.450897019 container died dcb57b1d498da49524739a1abd0b9fef120067a655e7de412a917aa08c5ff845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_brattain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 01:48:51 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:51 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:48:51 np0005539508 systemd[1]: var-lib-containers-storage-overlay-d88ddd02126c24bce52bf816074b4a7a73eeb245481dac60f40f87087ef9d0d6-merged.mount: Deactivated successfully.
Nov 29 01:48:51 np0005539508 podman[258505]: 2025-11-29 06:48:51.808309778 +0000 UTC m=+0.893548848 container remove dcb57b1d498da49524739a1abd0b9fef120067a655e7de412a917aa08c5ff845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:48:51 np0005539508 systemd[1]: libpod-conmon-dcb57b1d498da49524739a1abd0b9fef120067a655e7de412a917aa08c5ff845.scope: Deactivated successfully.
Nov 29 01:48:52 np0005539508 podman[258549]: 2025-11-29 06:48:52.051486355 +0000 UTC m=+0.052003388 container create a53d96fac660ae1844a06b68bf96cadb5422d9f8fec7d4281a51f66eab66c656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:48:52 np0005539508 systemd[1]: Started libpod-conmon-a53d96fac660ae1844a06b68bf96cadb5422d9f8fec7d4281a51f66eab66c656.scope.
Nov 29 01:48:52 np0005539508 podman[258549]: 2025-11-29 06:48:52.024860124 +0000 UTC m=+0.025377257 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:48:52 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:48:52 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23519ff070b5ee8841b096f7221962f4b1fe8ea6169012fb92494e6b4e2eb732/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:48:52 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23519ff070b5ee8841b096f7221962f4b1fe8ea6169012fb92494e6b4e2eb732/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:48:52 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23519ff070b5ee8841b096f7221962f4b1fe8ea6169012fb92494e6b4e2eb732/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:48:52 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23519ff070b5ee8841b096f7221962f4b1fe8ea6169012fb92494e6b4e2eb732/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:48:52 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23519ff070b5ee8841b096f7221962f4b1fe8ea6169012fb92494e6b4e2eb732/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:48:52 np0005539508 podman[258549]: 2025-11-29 06:48:52.168481631 +0000 UTC m=+0.168998694 container init a53d96fac660ae1844a06b68bf96cadb5422d9f8fec7d4281a51f66eab66c656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_archimedes, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 01:48:52 np0005539508 podman[258549]: 2025-11-29 06:48:52.181183905 +0000 UTC m=+0.181700938 container start a53d96fac660ae1844a06b68bf96cadb5422d9f8fec7d4281a51f66eab66c656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:48:52 np0005539508 podman[258549]: 2025-11-29 06:48:52.192929522 +0000 UTC m=+0.193446575 container attach a53d96fac660ae1844a06b68bf96cadb5422d9f8fec7d4281a51f66eab66c656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:48:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:48:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:48:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:52.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:48:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:48:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:52.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:48:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:53 np0005539508 musing_archimedes[258566]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:48:53 np0005539508 musing_archimedes[258566]: --> relative data size: 1.0
Nov 29 01:48:53 np0005539508 musing_archimedes[258566]: --> All data devices are unavailable
Nov 29 01:48:53 np0005539508 ceph-mgr[74948]: [devicehealth INFO root] Check health
Nov 29 01:48:53 np0005539508 systemd[1]: libpod-a53d96fac660ae1844a06b68bf96cadb5422d9f8fec7d4281a51f66eab66c656.scope: Deactivated successfully.
Nov 29 01:48:53 np0005539508 podman[258549]: 2025-11-29 06:48:53.066628026 +0000 UTC m=+1.067145119 container died a53d96fac660ae1844a06b68bf96cadb5422d9f8fec7d4281a51f66eab66c656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 01:48:53 np0005539508 systemd[1]: var-lib-containers-storage-overlay-23519ff070b5ee8841b096f7221962f4b1fe8ea6169012fb92494e6b4e2eb732-merged.mount: Deactivated successfully.
Nov 29 01:48:53 np0005539508 podman[258549]: 2025-11-29 06:48:53.17816519 +0000 UTC m=+1.178682263 container remove a53d96fac660ae1844a06b68bf96cadb5422d9f8fec7d4281a51f66eab66c656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:48:53 np0005539508 systemd[1]: libpod-conmon-a53d96fac660ae1844a06b68bf96cadb5422d9f8fec7d4281a51f66eab66c656.scope: Deactivated successfully.
Nov 29 01:48:54 np0005539508 podman[258739]: 2025-11-29 06:48:54.025953044 +0000 UTC m=+0.051588147 container create 16532a9922f9d6504c359bad0476d4283934fecfefa9711e95a5c737a4d28eb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_allen, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:48:54 np0005539508 systemd[1]: Started libpod-conmon-16532a9922f9d6504c359bad0476d4283934fecfefa9711e95a5c737a4d28eb5.scope.
Nov 29 01:48:54 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:48:54 np0005539508 podman[258739]: 2025-11-29 06:48:54.002060719 +0000 UTC m=+0.027695852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:48:54 np0005539508 podman[258739]: 2025-11-29 06:48:54.111496814 +0000 UTC m=+0.137131957 container init 16532a9922f9d6504c359bad0476d4283934fecfefa9711e95a5c737a4d28eb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:48:54 np0005539508 podman[258739]: 2025-11-29 06:48:54.119296822 +0000 UTC m=+0.144931925 container start 16532a9922f9d6504c359bad0476d4283934fecfefa9711e95a5c737a4d28eb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 01:48:54 np0005539508 fervent_allen[258755]: 167 167
Nov 29 01:48:54 np0005539508 systemd[1]: libpod-16532a9922f9d6504c359bad0476d4283934fecfefa9711e95a5c737a4d28eb5.scope: Deactivated successfully.
Nov 29 01:48:54 np0005539508 conmon[258755]: conmon 16532a9922f9d6504c35 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-16532a9922f9d6504c359bad0476d4283934fecfefa9711e95a5c737a4d28eb5.scope/container/memory.events
Nov 29 01:48:54 np0005539508 podman[258739]: 2025-11-29 06:48:54.129402543 +0000 UTC m=+0.155037706 container attach 16532a9922f9d6504c359bad0476d4283934fecfefa9711e95a5c737a4d28eb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:48:54 np0005539508 podman[258739]: 2025-11-29 06:48:54.130297638 +0000 UTC m=+0.155932751 container died 16532a9922f9d6504c359bad0476d4283934fecfefa9711e95a5c737a4d28eb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_allen, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 29 01:48:54 np0005539508 systemd[1]: var-lib-containers-storage-overlay-3d932d24ecd7c498135bee4c8885b58a8c4ceba49c219e2a1ed32429d86f27fa-merged.mount: Deactivated successfully.
Nov 29 01:48:54 np0005539508 podman[258739]: 2025-11-29 06:48:54.229645443 +0000 UTC m=+0.255280586 container remove 16532a9922f9d6504c359bad0476d4283934fecfefa9711e95a5c737a4d28eb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_allen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 01:48:54 np0005539508 systemd[1]: libpod-conmon-16532a9922f9d6504c359bad0476d4283934fecfefa9711e95a5c737a4d28eb5.scope: Deactivated successfully.
Nov 29 01:48:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:48:54
Nov 29 01:48:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:48:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:48:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'vms', 'images', 'volumes', '.mgr', 'cephfs.cephfs.meta']
Nov 29 01:48:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:48:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:54.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:48:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:48:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:48:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:48:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:48:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:48:54 np0005539508 podman[258781]: 2025-11-29 06:48:54.436470047 +0000 UTC m=+0.074815922 container create 31328f7a8b414102c9c3babfc7be0bce8e3464ef2a0d5f68eb75fcfa9a4b6074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 01:48:54 np0005539508 podman[258781]: 2025-11-29 06:48:54.40778311 +0000 UTC m=+0.046129065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:48:54 np0005539508 systemd[1]: Started libpod-conmon-31328f7a8b414102c9c3babfc7be0bce8e3464ef2a0d5f68eb75fcfa9a4b6074.scope.
Nov 29 01:48:54 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:48:54 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b875c59043919c16b7d4722be392eed8e18c419359c3c4ba20cfa7c27151a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:48:54 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b875c59043919c16b7d4722be392eed8e18c419359c3c4ba20cfa7c27151a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:48:54 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b875c59043919c16b7d4722be392eed8e18c419359c3c4ba20cfa7c27151a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:48:54 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b875c59043919c16b7d4722be392eed8e18c419359c3c4ba20cfa7c27151a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:48:54 np0005539508 podman[258781]: 2025-11-29 06:48:54.582502371 +0000 UTC m=+0.220848256 container init 31328f7a8b414102c9c3babfc7be0bce8e3464ef2a0d5f68eb75fcfa9a4b6074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_benz, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 01:48:54 np0005539508 podman[258781]: 2025-11-29 06:48:54.590122303 +0000 UTC m=+0.228468168 container start 31328f7a8b414102c9c3babfc7be0bce8e3464ef2a0d5f68eb75fcfa9a4b6074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:48:54 np0005539508 podman[258781]: 2025-11-29 06:48:54.595175044 +0000 UTC m=+0.233520899 container attach 31328f7a8b414102c9c3babfc7be0bce8e3464ef2a0d5f68eb75fcfa9a4b6074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 01:48:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:54.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:55 np0005539508 exciting_benz[258797]: {
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:    "1": [
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:        {
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:            "devices": [
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:                "/dev/loop3"
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:            ],
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:            "lv_name": "ceph_lv0",
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:            "lv_size": "7511998464",
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:            "name": "ceph_lv0",
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:            "tags": {
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:                "ceph.cluster_name": "ceph",
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:                "ceph.crush_device_class": "",
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:                "ceph.encrypted": "0",
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:                "ceph.osd_id": "1",
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:                "ceph.type": "block",
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:                "ceph.vdo": "0"
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:            },
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:            "type": "block",
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:            "vg_name": "ceph_vg0"
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:        }
Nov 29 01:48:55 np0005539508 exciting_benz[258797]:    ]
Nov 29 01:48:55 np0005539508 exciting_benz[258797]: }
Nov 29 01:48:55 np0005539508 systemd[1]: libpod-31328f7a8b414102c9c3babfc7be0bce8e3464ef2a0d5f68eb75fcfa9a4b6074.scope: Deactivated successfully.
Nov 29 01:48:55 np0005539508 podman[258857]: 2025-11-29 06:48:55.376350834 +0000 UTC m=+0.031781596 container died 31328f7a8b414102c9c3babfc7be0bce8e3464ef2a0d5f68eb75fcfa9a4b6074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_benz, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 01:48:55 np0005539508 systemd[1]: var-lib-containers-storage-overlay-e4b875c59043919c16b7d4722be392eed8e18c419359c3c4ba20cfa7c27151a1-merged.mount: Deactivated successfully.
Nov 29 01:48:56 np0005539508 podman[258857]: 2025-11-29 06:48:56.187774465 +0000 UTC m=+0.843205267 container remove 31328f7a8b414102c9c3babfc7be0bce8e3464ef2a0d5f68eb75fcfa9a4b6074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_benz, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 01:48:56 np0005539508 systemd[1]: libpod-conmon-31328f7a8b414102c9c3babfc7be0bce8e3464ef2a0d5f68eb75fcfa9a4b6074.scope: Deactivated successfully.
Nov 29 01:48:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:48:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:56.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:48:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:56.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:56 np0005539508 podman[259012]: 2025-11-29 06:48:56.979470188 +0000 UTC m=+0.060537306 container create d4503fe63abccb54b637bc743b1a92e184a2195bebd6d0dcf6ecad18428b79b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_pare, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:48:57 np0005539508 systemd[1]: Started libpod-conmon-d4503fe63abccb54b637bc743b1a92e184a2195bebd6d0dcf6ecad18428b79b6.scope.
Nov 29 01:48:57 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:48:57 np0005539508 podman[259012]: 2025-11-29 06:48:56.962009412 +0000 UTC m=+0.043076560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:48:57 np0005539508 podman[259012]: 2025-11-29 06:48:57.063339872 +0000 UTC m=+0.144407080 container init d4503fe63abccb54b637bc743b1a92e184a2195bebd6d0dcf6ecad18428b79b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_pare, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 01:48:57 np0005539508 podman[259012]: 2025-11-29 06:48:57.072198568 +0000 UTC m=+0.153265676 container start d4503fe63abccb54b637bc743b1a92e184a2195bebd6d0dcf6ecad18428b79b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_pare, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 01:48:57 np0005539508 podman[259012]: 2025-11-29 06:48:57.075739457 +0000 UTC m=+0.156806615 container attach d4503fe63abccb54b637bc743b1a92e184a2195bebd6d0dcf6ecad18428b79b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:48:57 np0005539508 frosty_pare[259028]: 167 167
Nov 29 01:48:57 np0005539508 systemd[1]: libpod-d4503fe63abccb54b637bc743b1a92e184a2195bebd6d0dcf6ecad18428b79b6.scope: Deactivated successfully.
Nov 29 01:48:57 np0005539508 podman[259012]: 2025-11-29 06:48:57.078773831 +0000 UTC m=+0.159840949 container died d4503fe63abccb54b637bc743b1a92e184a2195bebd6d0dcf6ecad18428b79b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_pare, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:48:57 np0005539508 systemd[1]: var-lib-containers-storage-overlay-5cde43ffdbef4d88360d42698e3934f180209c2bbca46d1474ad175f165c9ffa-merged.mount: Deactivated successfully.
Nov 29 01:48:57 np0005539508 podman[259012]: 2025-11-29 06:48:57.128285539 +0000 UTC m=+0.209352687 container remove d4503fe63abccb54b637bc743b1a92e184a2195bebd6d0dcf6ecad18428b79b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_pare, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Nov 29 01:48:57 np0005539508 systemd[1]: libpod-conmon-d4503fe63abccb54b637bc743b1a92e184a2195bebd6d0dcf6ecad18428b79b6.scope: Deactivated successfully.
Nov 29 01:48:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:48:57 np0005539508 podman[259053]: 2025-11-29 06:48:57.361399597 +0000 UTC m=+0.065690200 container create bd4cb8bb471b8dd896b1297799632e3aa2754647221d20aae58bda7f0b81eeb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_chaplygin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 01:48:57 np0005539508 systemd[1]: Started libpod-conmon-bd4cb8bb471b8dd896b1297799632e3aa2754647221d20aae58bda7f0b81eeb1.scope.
Nov 29 01:48:57 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:48:57 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac338160a1fc434e6cf6ff3a778913d5d4485bbe5b6e64f18820cc36137bf3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:48:57 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac338160a1fc434e6cf6ff3a778913d5d4485bbe5b6e64f18820cc36137bf3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:48:57 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac338160a1fc434e6cf6ff3a778913d5d4485bbe5b6e64f18820cc36137bf3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:48:57 np0005539508 podman[259053]: 2025-11-29 06:48:57.336172825 +0000 UTC m=+0.040463428 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:48:57 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac338160a1fc434e6cf6ff3a778913d5d4485bbe5b6e64f18820cc36137bf3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:48:57 np0005539508 podman[259053]: 2025-11-29 06:48:57.456294677 +0000 UTC m=+0.160585280 container init bd4cb8bb471b8dd896b1297799632e3aa2754647221d20aae58bda7f0b81eeb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 01:48:57 np0005539508 podman[259053]: 2025-11-29 06:48:57.468286901 +0000 UTC m=+0.172577474 container start bd4cb8bb471b8dd896b1297799632e3aa2754647221d20aae58bda7f0b81eeb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_chaplygin, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 29 01:48:57 np0005539508 podman[259053]: 2025-11-29 06:48:57.474149594 +0000 UTC m=+0.178440267 container attach bd4cb8bb471b8dd896b1297799632e3aa2754647221d20aae58bda7f0b81eeb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 01:48:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:48:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:58.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:48:58 np0005539508 jolly_chaplygin[259069]: {
Nov 29 01:48:58 np0005539508 jolly_chaplygin[259069]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:48:58 np0005539508 jolly_chaplygin[259069]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:48:58 np0005539508 jolly_chaplygin[259069]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:48:58 np0005539508 jolly_chaplygin[259069]:        "osd_id": 1,
Nov 29 01:48:58 np0005539508 jolly_chaplygin[259069]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:48:58 np0005539508 jolly_chaplygin[259069]:        "type": "bluestore"
Nov 29 01:48:58 np0005539508 jolly_chaplygin[259069]:    }
Nov 29 01:48:58 np0005539508 jolly_chaplygin[259069]: }
Nov 29 01:48:58 np0005539508 systemd[1]: libpod-bd4cb8bb471b8dd896b1297799632e3aa2754647221d20aae58bda7f0b81eeb1.scope: Deactivated successfully.
Nov 29 01:48:58 np0005539508 podman[259053]: 2025-11-29 06:48:58.370018265 +0000 UTC m=+1.074308898 container died bd4cb8bb471b8dd896b1297799632e3aa2754647221d20aae58bda7f0b81eeb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 01:48:58 np0005539508 systemd[1]: var-lib-containers-storage-overlay-aac338160a1fc434e6cf6ff3a778913d5d4485bbe5b6e64f18820cc36137bf3f-merged.mount: Deactivated successfully.
Nov 29 01:48:58 np0005539508 podman[259053]: 2025-11-29 06:48:58.434424547 +0000 UTC m=+1.138715110 container remove bd4cb8bb471b8dd896b1297799632e3aa2754647221d20aae58bda7f0b81eeb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_chaplygin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 01:48:58 np0005539508 systemd[1]: libpod-conmon-bd4cb8bb471b8dd896b1297799632e3aa2754647221d20aae58bda7f0b81eeb1.scope: Deactivated successfully.
Nov 29 01:48:58 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:48:58 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:58 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:48:58 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:58 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev f962f9a4-6523-4bf5-b821-10feb7e4d907 does not exist
Nov 29 01:48:58 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 442fb065-95d6-436c-859a-cc8110335ca4 does not exist
Nov 29 01:48:58 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev a3f1ca67-ecb1-42f4-8c7f-f0adb95cfd82 does not exist
Nov 29 01:48:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:48:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:48:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:58.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:48:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:48:59 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:48:59 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:49:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:00.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:49:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:00.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:49:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:02 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:49:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:49:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:02.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:49:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:02.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:02 np0005539508 nova_compute[251877]: 2025-11-29 06:49:02.827 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:49:02 np0005539508 nova_compute[251877]: 2025-11-29 06:49:02.828 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:49:02 np0005539508 nova_compute[251877]: 2025-11-29 06:49:02.829 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 01:49:02 np0005539508 nova_compute[251877]: 2025-11-29 06:49:02.829 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 01:49:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:49:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:04.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:49:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:04.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:06.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:06.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:07 np0005539508 podman[259162]: 2025-11-29 06:49:07.106388409 +0000 UTC m=+0.071036408 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd)
Nov 29 01:49:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:49:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:08.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:49:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:08.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:49:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:10.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:49:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:10.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:49:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:49:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:49:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:12.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:49:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:12.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:49:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:49:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:49:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:49:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:49:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:49:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:49:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:49:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:49:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:49:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:49:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:49:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:49:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:49:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:49:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:49:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:49:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:49:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:49:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:49:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:49:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:49:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:49:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:14.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:14.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:16.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:16.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:49:17.239 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:49:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:49:17.240 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:49:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:49:17.240 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:49:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:49:18 np0005539508 podman[259240]: 2025-11-29 06:49:18.157596916 +0000 UTC m=+0.111135594 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 01:49:18 np0005539508 podman[259241]: 2025-11-29 06:49:18.168224582 +0000 UTC m=+0.122294675 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 01:49:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:18.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:49:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:18.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:49:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:49:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:20.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:49:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:20.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:22.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:49:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:49:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:22.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:49:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:49:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:49:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:49:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:49:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:49:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:49:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:49:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:24.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:49:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:24.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:26.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.002000056s ======
Nov 29 01:49:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:26.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Nov 29 01:49:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:27 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:49:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:49:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:28.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:49:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:49:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:28.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:49:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:49:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:49:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:49:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:49:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:49:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:49:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:49:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:49:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:49:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:49:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:30.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:49:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:30.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:49:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:32.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:49:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:49:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:32.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:49:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:34 np0005539508 nova_compute[251877]: 2025-11-29 06:49:34.163 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 01:49:34 np0005539508 nova_compute[251877]: 2025-11-29 06:49:34.165 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:49:34 np0005539508 nova_compute[251877]: 2025-11-29 06:49:34.165 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:49:34 np0005539508 nova_compute[251877]: 2025-11-29 06:49:34.165 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:49:34 np0005539508 nova_compute[251877]: 2025-11-29 06:49:34.165 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:49:34 np0005539508 nova_compute[251877]: 2025-11-29 06:49:34.165 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:49:34 np0005539508 nova_compute[251877]: 2025-11-29 06:49:34.166 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:49:34 np0005539508 nova_compute[251877]: 2025-11-29 06:49:34.166 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 01:49:34 np0005539508 nova_compute[251877]: 2025-11-29 06:49:34.166 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:49:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:34.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:34.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:35 np0005539508 nova_compute[251877]: 2025-11-29 06:49:35.003 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 24.02 sec#033[00m
Nov 29 01:49:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:49:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:36.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:49:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:49:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:36.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:49:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:37 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:49:38 np0005539508 podman[259356]: 2025-11-29 06:49:38.15121844 +0000 UTC m=+0.100701023 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 01:49:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:49:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:38.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:49:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:38.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:49:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:40.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:49:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:49:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:40.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:49:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:42.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:42 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:49:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:49:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:42.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:49:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:49:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:44.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:49:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:44.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:46.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:46.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:49:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:48.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:48.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:49 np0005539508 podman[259383]: 2025-11-29 06:49:49.121536966 +0000 UTC m=+0.086897099 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 01:49:49 np0005539508 podman[259384]: 2025-11-29 06:49:49.176079914 +0000 UTC m=+0.133738493 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 01:49:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:50.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:49:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:50.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:49:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:51 np0005539508 nova_compute[251877]: 2025-11-29 06:49:51.121 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 01:49:51 np0005539508 nova_compute[251877]: 2025-11-29 06:49:51.122 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 01:49:51 np0005539508 nova_compute[251877]: 2025-11-29 06:49:51.122 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 01:49:51 np0005539508 nova_compute[251877]: 2025-11-29 06:49:51.122 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 01:49:51 np0005539508 nova_compute[251877]: 2025-11-29 06:49:51.123 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 01:49:51 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 01:49:51 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1336302150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 01:49:51 np0005539508 nova_compute[251877]: 2025-11-29 06:49:51.574 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 01:49:51 np0005539508 nova_compute[251877]: 2025-11-29 06:49:51.743 251881 WARNING nova.virt.libvirt.driver [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 01:49:51 np0005539508 nova_compute[251877]: 2025-11-29 06:49:51.744 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5203MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 01:49:51 np0005539508 nova_compute[251877]: 2025-11-29 06:49:51.745 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 01:49:51 np0005539508 nova_compute[251877]: 2025-11-29 06:49:51.745 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 01:49:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:52.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:49:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:52.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:49:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:49:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:53 np0005539508 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 01:49:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:49:54
Nov 29 01:49:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:49:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:49:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'images', '.rgw.root', 'backups']
Nov 29 01:49:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:49:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:49:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:49:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:49:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:49:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:49:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:49:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:49:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:54.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:49:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:49:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:54.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:49:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:49:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:56.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:49:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:56.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:57 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:49:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:49:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:58.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:49:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:49:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:49:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:58.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:49:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:49:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 01:49:59 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:49:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:49:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:49:59 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:49:59 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:49:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:49:59 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:50:00 np0005539508 ceph-mon[74654]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 01:50:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:50:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:00.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:50:00 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:50:00 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:50:00 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:50:00 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:50:00 np0005539508 ceph-mon[74654]: overall HEALTH_OK
Nov 29 01:50:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:50:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:00.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:50:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:01 np0005539508 podman[259908]: 2025-11-29 06:50:01.254105835 +0000 UTC m=+0.018515706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:50:01 np0005539508 podman[259908]: 2025-11-29 06:50:01.591692911 +0000 UTC m=+0.356102792 container create 3bfe9fd9a7f7924df2f697bbef9ba4eb120fbfdde14cc5469eb0e9bdb2454e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 01:50:02 np0005539508 systemd[1]: Started libpod-conmon-3bfe9fd9a7f7924df2f697bbef9ba4eb120fbfdde14cc5469eb0e9bdb2454e5c.scope.
Nov 29 01:50:02 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:50:02 np0005539508 nova_compute[251877]: 2025-11-29 06:50:02.209 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 17.21 sec
Nov 29 01:50:02 np0005539508 podman[259908]: 2025-11-29 06:50:02.386313994 +0000 UTC m=+1.150723945 container init 3bfe9fd9a7f7924df2f697bbef9ba4eb120fbfdde14cc5469eb0e9bdb2454e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:50:02 np0005539508 podman[259908]: 2025-11-29 06:50:02.402022591 +0000 UTC m=+1.166432482 container start 3bfe9fd9a7f7924df2f697bbef9ba4eb120fbfdde14cc5469eb0e9bdb2454e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 01:50:02 np0005539508 systemd[1]: libpod-3bfe9fd9a7f7924df2f697bbef9ba4eb120fbfdde14cc5469eb0e9bdb2454e5c.scope: Deactivated successfully.
Nov 29 01:50:02 np0005539508 compassionate_meninsky[259924]: 167 167
Nov 29 01:50:02 np0005539508 conmon[259924]: conmon 3bfe9fd9a7f7924df2f6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3bfe9fd9a7f7924df2f697bbef9ba4eb120fbfdde14cc5469eb0e9bdb2454e5c.scope/container/memory.events
Nov 29 01:50:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:50:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:02.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:50:02 np0005539508 podman[259908]: 2025-11-29 06:50:02.415961239 +0000 UTC m=+1.180371130 container attach 3bfe9fd9a7f7924df2f697bbef9ba4eb120fbfdde14cc5469eb0e9bdb2454e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 01:50:02 np0005539508 podman[259908]: 2025-11-29 06:50:02.416476044 +0000 UTC m=+1.180885895 container died 3bfe9fd9a7f7924df2f697bbef9ba4eb120fbfdde14cc5469eb0e9bdb2454e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:50:02 np0005539508 systemd[1]: var-lib-containers-storage-overlay-b88b283df89e45305ff495b39de5ecab2febb386624824f3a0b6bf87dca9414c-merged.mount: Deactivated successfully.
Nov 29 01:50:02 np0005539508 podman[259908]: 2025-11-29 06:50:02.465985281 +0000 UTC m=+1.230395122 container remove 3bfe9fd9a7f7924df2f697bbef9ba4eb120fbfdde14cc5469eb0e9bdb2454e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:50:02 np0005539508 systemd[1]: libpod-conmon-3bfe9fd9a7f7924df2f697bbef9ba4eb120fbfdde14cc5469eb0e9bdb2454e5c.scope: Deactivated successfully.
Nov 29 01:50:02 np0005539508 podman[259948]: 2025-11-29 06:50:02.627468455 +0000 UTC m=+0.040899430 container create 7ae069206aa30e87010361f0730759e8f253ad0bef4b4d9169c0d134ef7f39c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 01:50:02 np0005539508 systemd[1]: Started libpod-conmon-7ae069206aa30e87010361f0730759e8f253ad0bef4b4d9169c0d134ef7f39c9.scope.
Nov 29 01:50:02 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:50:02 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8afd11d7f70042ca1d8a4f56df57e03b639bcd4fe773a0d22bb64f607027be07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:50:02 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8afd11d7f70042ca1d8a4f56df57e03b639bcd4fe773a0d22bb64f607027be07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:50:02 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8afd11d7f70042ca1d8a4f56df57e03b639bcd4fe773a0d22bb64f607027be07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:50:02 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8afd11d7f70042ca1d8a4f56df57e03b639bcd4fe773a0d22bb64f607027be07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:50:02 np0005539508 podman[259948]: 2025-11-29 06:50:02.688911854 +0000 UTC m=+0.102342849 container init 7ae069206aa30e87010361f0730759e8f253ad0bef4b4d9169c0d134ef7f39c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 01:50:02 np0005539508 podman[259948]: 2025-11-29 06:50:02.696956548 +0000 UTC m=+0.110387523 container start 7ae069206aa30e87010361f0730759e8f253ad0bef4b4d9169c0d134ef7f39c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_varahamihira, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 01:50:02 np0005539508 podman[259948]: 2025-11-29 06:50:02.700794945 +0000 UTC m=+0.114225910 container attach 7ae069206aa30e87010361f0730759e8f253ad0bef4b4d9169c0d134ef7f39c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:50:02 np0005539508 podman[259948]: 2025-11-29 06:50:02.611724336 +0000 UTC m=+0.025155341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:50:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:02.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:02 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:50:02 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]: [
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:    {
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:        "available": false,
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:        "ceph_device": false,
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:        "lsm_data": {},
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:        "lvs": [],
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:        "path": "/dev/sr0",
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:        "rejected_reasons": [
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "Insufficient space (<5GB)",
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "Has a FileSystem"
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:        ],
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:        "sys_api": {
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "actuators": null,
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "device_nodes": "sr0",
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "devname": "sr0",
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "human_readable_size": "482.00 KB",
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "id_bus": "ata",
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "model": "QEMU DVD-ROM",
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "nr_requests": "2",
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "parent": "/dev/sr0",
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "partitions": {},
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "path": "/dev/sr0",
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "removable": "1",
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "rev": "2.5+",
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "ro": "0",
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "rotational": "1",
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "sas_address": "",
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "sas_device_handle": "",
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "scheduler_mode": "mq-deadline",
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "sectors": 0,
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "sectorsize": "2048",
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "size": 493568.0,
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "support_discard": "2048",
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "type": "disk",
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:            "vendor": "QEMU"
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:        }
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]:    }
Nov 29 01:50:03 np0005539508 elastic_varahamihira[259964]: ]
Nov 29 01:50:03 np0005539508 systemd[1]: libpod-7ae069206aa30e87010361f0730759e8f253ad0bef4b4d9169c0d134ef7f39c9.scope: Deactivated successfully.
Nov 29 01:50:03 np0005539508 systemd[1]: libpod-7ae069206aa30e87010361f0730759e8f253ad0bef4b4d9169c0d134ef7f39c9.scope: Consumed 1.208s CPU time.
Nov 29 01:50:03 np0005539508 podman[259948]: 2025-11-29 06:50:03.887561632 +0000 UTC m=+1.300992697 container died 7ae069206aa30e87010361f0730759e8f253ad0bef4b4d9169c0d134ef7f39c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:50:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:04.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:04.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:04 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:05 np0005539508 systemd[1]: var-lib-containers-storage-overlay-8afd11d7f70042ca1d8a4f56df57e03b639bcd4fe773a0d22bb64f607027be07-merged.mount: Deactivated successfully.
Nov 29 01:50:06 np0005539508 podman[259948]: 2025-11-29 06:50:06.112100509 +0000 UTC m=+3.525531514 container remove 7ae069206aa30e87010361f0730759e8f253ad0bef4b4d9169c0d134ef7f39c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:50:06 np0005539508 systemd[1]: libpod-conmon-7ae069206aa30e87010361f0730759e8f253ad0bef4b4d9169c0d134ef7f39c9.scope: Deactivated successfully.
Nov 29 01:50:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:50:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:50:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:50:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:50:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:06.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:50:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:50:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:50:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:50:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:50:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:50:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:50:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:06.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:50:06 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev c586e1d7-4cfa-46ed-8395-a440055d9e82 does not exist
Nov 29 01:50:06 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 80a10c9d-7bee-4824-9555-5b2bac572609 does not exist
Nov 29 01:50:06 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 5949d3f3-a967-415d-aaa7-97dbff98474f does not exist
Nov 29 01:50:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:50:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:50:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:50:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:50:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:50:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:50:06 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:07 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:50:07 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:50:07 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:50:07 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:50:07 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:50:07 np0005539508 podman[261226]: 2025-11-29 06:50:07.552107343 +0000 UTC m=+0.078095434 container create c8ba3cdd2717fd4dd0ab753c81035442881cf1d4ba9f75d2b7228c0f9a8421fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_torvalds, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 01:50:07 np0005539508 podman[261226]: 2025-11-29 06:50:07.497086912 +0000 UTC m=+0.023075063 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:50:07 np0005539508 systemd[1]: Started libpod-conmon-c8ba3cdd2717fd4dd0ab753c81035442881cf1d4ba9f75d2b7228c0f9a8421fe.scope.
Nov 29 01:50:07 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:50:07 np0005539508 podman[261226]: 2025-11-29 06:50:07.657094275 +0000 UTC m=+0.183082416 container init c8ba3cdd2717fd4dd0ab753c81035442881cf1d4ba9f75d2b7228c0f9a8421fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_torvalds, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 01:50:07 np0005539508 podman[261226]: 2025-11-29 06:50:07.66841033 +0000 UTC m=+0.194398431 container start c8ba3cdd2717fd4dd0ab753c81035442881cf1d4ba9f75d2b7228c0f9a8421fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 01:50:07 np0005539508 podman[261226]: 2025-11-29 06:50:07.672485433 +0000 UTC m=+0.198473534 container attach c8ba3cdd2717fd4dd0ab753c81035442881cf1d4ba9f75d2b7228c0f9a8421fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 01:50:07 np0005539508 romantic_torvalds[261242]: 167 167
Nov 29 01:50:07 np0005539508 systemd[1]: libpod-c8ba3cdd2717fd4dd0ab753c81035442881cf1d4ba9f75d2b7228c0f9a8421fe.scope: Deactivated successfully.
Nov 29 01:50:07 np0005539508 conmon[261242]: conmon c8ba3cdd2717fd4dd0ab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c8ba3cdd2717fd4dd0ab753c81035442881cf1d4ba9f75d2b7228c0f9a8421fe.scope/container/memory.events
Nov 29 01:50:07 np0005539508 podman[261226]: 2025-11-29 06:50:07.676870895 +0000 UTC m=+0.202858966 container died c8ba3cdd2717fd4dd0ab753c81035442881cf1d4ba9f75d2b7228c0f9a8421fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_torvalds, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:50:07 np0005539508 systemd[1]: var-lib-containers-storage-overlay-2951d8c66507f64fcc8fafe3ae8d7d53470ec114c7db00fd0df73eb1d1c3f0e0-merged.mount: Deactivated successfully.
Nov 29 01:50:07 np0005539508 podman[261226]: 2025-11-29 06:50:07.78917051 +0000 UTC m=+0.315158611 container remove c8ba3cdd2717fd4dd0ab753c81035442881cf1d4ba9f75d2b7228c0f9a8421fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 01:50:07 np0005539508 systemd[1]: libpod-conmon-c8ba3cdd2717fd4dd0ab753c81035442881cf1d4ba9f75d2b7228c0f9a8421fe.scope: Deactivated successfully.
Nov 29 01:50:07 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:50:07 np0005539508 podman[261266]: 2025-11-29 06:50:07.962104043 +0000 UTC m=+0.044010256 container create c969f9b54c5fd6b488bae5d0b88fd44298c1a9798b470ea7a61d9a5e2ceae08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcnulty, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:50:07 np0005539508 systemd[1]: Started libpod-conmon-c969f9b54c5fd6b488bae5d0b88fd44298c1a9798b470ea7a61d9a5e2ceae08b.scope.
Nov 29 01:50:08 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:50:08 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3842841a5cd32a4d3368ba1fdde768ca6e88f2eda84436d1e9a519e113b568ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:50:08 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3842841a5cd32a4d3368ba1fdde768ca6e88f2eda84436d1e9a519e113b568ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:50:08 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3842841a5cd32a4d3368ba1fdde768ca6e88f2eda84436d1e9a519e113b568ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:50:08 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3842841a5cd32a4d3368ba1fdde768ca6e88f2eda84436d1e9a519e113b568ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:50:08 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3842841a5cd32a4d3368ba1fdde768ca6e88f2eda84436d1e9a519e113b568ae/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:50:08 np0005539508 podman[261266]: 2025-11-29 06:50:07.94440412 +0000 UTC m=+0.026310243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:50:08 np0005539508 podman[261266]: 2025-11-29 06:50:08.043987082 +0000 UTC m=+0.125893215 container init c969f9b54c5fd6b488bae5d0b88fd44298c1a9798b470ea7a61d9a5e2ceae08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 01:50:08 np0005539508 podman[261266]: 2025-11-29 06:50:08.051033018 +0000 UTC m=+0.132939131 container start c969f9b54c5fd6b488bae5d0b88fd44298c1a9798b470ea7a61d9a5e2ceae08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcnulty, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:50:08 np0005539508 podman[261266]: 2025-11-29 06:50:08.054385931 +0000 UTC m=+0.136292074 container attach c969f9b54c5fd6b488bae5d0b88fd44298c1a9798b470ea7a61d9a5e2ceae08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcnulty, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:50:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:50:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:08.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:50:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:08.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:08 np0005539508 thirsty_mcnulty[261283]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:50:08 np0005539508 thirsty_mcnulty[261283]: --> relative data size: 1.0
Nov 29 01:50:08 np0005539508 thirsty_mcnulty[261283]: --> All data devices are unavailable
Nov 29 01:50:08 np0005539508 systemd[1]: libpod-c969f9b54c5fd6b488bae5d0b88fd44298c1a9798b470ea7a61d9a5e2ceae08b.scope: Deactivated successfully.
Nov 29 01:50:08 np0005539508 podman[261266]: 2025-11-29 06:50:08.946411086 +0000 UTC m=+1.028317249 container died c969f9b54c5fd6b488bae5d0b88fd44298c1a9798b470ea7a61d9a5e2ceae08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:50:08 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:09 np0005539508 systemd[1]: var-lib-containers-storage-overlay-3842841a5cd32a4d3368ba1fdde768ca6e88f2eda84436d1e9a519e113b568ae-merged.mount: Deactivated successfully.
Nov 29 01:50:10 np0005539508 podman[261266]: 2025-11-29 06:50:10.110591083 +0000 UTC m=+2.192497196 container remove c969f9b54c5fd6b488bae5d0b88fd44298c1a9798b470ea7a61d9a5e2ceae08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcnulty, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 01:50:10 np0005539508 podman[261299]: 2025-11-29 06:50:10.202369127 +0000 UTC m=+1.229761303 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 01:50:10 np0005539508 systemd[1]: libpod-conmon-c969f9b54c5fd6b488bae5d0b88fd44298c1a9798b470ea7a61d9a5e2ceae08b.scope: Deactivated successfully.
Nov 29 01:50:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:10.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:50:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:10.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:50:10 np0005539508 podman[261472]: 2025-11-29 06:50:10.800394369 +0000 UTC m=+0.023179206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:50:10 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:11 np0005539508 podman[261472]: 2025-11-29 06:50:11.035868042 +0000 UTC m=+0.258652819 container create 281eaee22f7ec0cc81cb074aca450fbe07b33b0f9556413e3c654e38ae5a463c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:50:11 np0005539508 systemd[1]: Started libpod-conmon-281eaee22f7ec0cc81cb074aca450fbe07b33b0f9556413e3c654e38ae5a463c.scope.
Nov 29 01:50:11 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:50:11 np0005539508 podman[261472]: 2025-11-29 06:50:11.745204003 +0000 UTC m=+0.967988870 container init 281eaee22f7ec0cc81cb074aca450fbe07b33b0f9556413e3c654e38ae5a463c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 01:50:11 np0005539508 podman[261472]: 2025-11-29 06:50:11.756104386 +0000 UTC m=+0.978889193 container start 281eaee22f7ec0cc81cb074aca450fbe07b33b0f9556413e3c654e38ae5a463c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 01:50:11 np0005539508 awesome_noether[261490]: 167 167
Nov 29 01:50:11 np0005539508 systemd[1]: libpod-281eaee22f7ec0cc81cb074aca450fbe07b33b0f9556413e3c654e38ae5a463c.scope: Deactivated successfully.
Nov 29 01:50:11 np0005539508 podman[261472]: 2025-11-29 06:50:11.998484842 +0000 UTC m=+1.221269689 container attach 281eaee22f7ec0cc81cb074aca450fbe07b33b0f9556413e3c654e38ae5a463c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 01:50:12 np0005539508 podman[261472]: 2025-11-29 06:50:11.99913578 +0000 UTC m=+1.221920567 container died 281eaee22f7ec0cc81cb074aca450fbe07b33b0f9556413e3c654e38ae5a463c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:50:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:50:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:12.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:50:12 np0005539508 systemd[1]: var-lib-containers-storage-overlay-b1368f4007badfc1dbafd8b52358861f7b40315c33461021e0be2ab92c2c0c91-merged.mount: Deactivated successfully.
Nov 29 01:50:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:50:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:12.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:50:12 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:50:12 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:50:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:50:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:50:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:50:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:50:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:50:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:50:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:50:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:50:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:50:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:50:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:50:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:50:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:50:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:50:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:50:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:50:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:50:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:50:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:50:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:50:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:50:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:50:13 np0005539508 podman[261472]: 2025-11-29 06:50:13.21083626 +0000 UTC m=+2.433621077 container remove 281eaee22f7ec0cc81cb074aca450fbe07b33b0f9556413e3c654e38ae5a463c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:50:13 np0005539508 systemd[1]: libpod-conmon-281eaee22f7ec0cc81cb074aca450fbe07b33b0f9556413e3c654e38ae5a463c.scope: Deactivated successfully.
Nov 29 01:50:13 np0005539508 podman[261515]: 2025-11-29 06:50:13.457190396 +0000 UTC m=+0.044240142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:50:13 np0005539508 podman[261515]: 2025-11-29 06:50:13.690768716 +0000 UTC m=+0.277818422 container create aee34b057a7b9af05c001a883b81e8d0789198e401183328e12fbb616f017bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mendeleev, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 01:50:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:14.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:14 np0005539508 systemd[1]: Started libpod-conmon-aee34b057a7b9af05c001a883b81e8d0789198e401183328e12fbb616f017bc2.scope.
Nov 29 01:50:14 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:50:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cc6b8d35a75c776419672f04e20975e172342ff0142f5ff1f24e0041adecea4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:50:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cc6b8d35a75c776419672f04e20975e172342ff0142f5ff1f24e0041adecea4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:50:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cc6b8d35a75c776419672f04e20975e172342ff0142f5ff1f24e0041adecea4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:50:14 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cc6b8d35a75c776419672f04e20975e172342ff0142f5ff1f24e0041adecea4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:50:14 np0005539508 podman[261515]: 2025-11-29 06:50:14.667524729 +0000 UTC m=+1.254574485 container init aee34b057a7b9af05c001a883b81e8d0789198e401183328e12fbb616f017bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mendeleev, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:50:14 np0005539508 podman[261515]: 2025-11-29 06:50:14.676603962 +0000 UTC m=+1.263653658 container start aee34b057a7b9af05c001a883b81e8d0789198e401183328e12fbb616f017bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mendeleev, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:50:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:50:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:14.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:50:14 np0005539508 podman[261515]: 2025-11-29 06:50:14.915267794 +0000 UTC m=+1.502317470 container attach aee34b057a7b9af05c001a883b81e8d0789198e401183328e12fbb616f017bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mendeleev, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 01:50:14 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]: {
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:    "1": [
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:        {
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:            "devices": [
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:                "/dev/loop3"
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:            ],
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:            "lv_name": "ceph_lv0",
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:            "lv_size": "7511998464",
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:            "name": "ceph_lv0",
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:            "tags": {
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:                "ceph.cluster_name": "ceph",
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:                "ceph.crush_device_class": "",
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:                "ceph.encrypted": "0",
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:                "ceph.osd_id": "1",
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:                "ceph.type": "block",
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:                "ceph.vdo": "0"
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:            },
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:            "type": "block",
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:            "vg_name": "ceph_vg0"
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:        }
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]:    ]
Nov 29 01:50:15 np0005539508 condescending_mendeleev[261532]: }
Nov 29 01:50:15 np0005539508 systemd[1]: libpod-aee34b057a7b9af05c001a883b81e8d0789198e401183328e12fbb616f017bc2.scope: Deactivated successfully.
Nov 29 01:50:15 np0005539508 conmon[261532]: conmon aee34b057a7b9af05c00 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aee34b057a7b9af05c001a883b81e8d0789198e401183328e12fbb616f017bc2.scope/container/memory.events
Nov 29 01:50:15 np0005539508 podman[261515]: 2025-11-29 06:50:15.471141383 +0000 UTC m=+2.058191089 container died aee34b057a7b9af05c001a883b81e8d0789198e401183328e12fbb616f017bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:50:15 np0005539508 systemd[1]: var-lib-containers-storage-overlay-9cc6b8d35a75c776419672f04e20975e172342ff0142f5ff1f24e0041adecea4-merged.mount: Deactivated successfully.
Nov 29 01:50:15 np0005539508 podman[261515]: 2025-11-29 06:50:15.863106302 +0000 UTC m=+2.450156008 container remove aee34b057a7b9af05c001a883b81e8d0789198e401183328e12fbb616f017bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mendeleev, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:50:15 np0005539508 systemd[1]: libpod-conmon-aee34b057a7b9af05c001a883b81e8d0789198e401183328e12fbb616f017bc2.scope: Deactivated successfully.
Nov 29 01:50:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:16.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:16 np0005539508 podman[261745]: 2025-11-29 06:50:16.515641532 +0000 UTC m=+0.046003332 container create bee74bc97f6963cd17b24984114892d0abe3d5878423ef7747a8f8d44ddc04e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 01:50:16 np0005539508 systemd[1]: Started libpod-conmon-bee74bc97f6963cd17b24984114892d0abe3d5878423ef7747a8f8d44ddc04e8.scope.
Nov 29 01:50:16 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:50:16 np0005539508 podman[261745]: 2025-11-29 06:50:16.49798997 +0000 UTC m=+0.028351790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:50:16 np0005539508 podman[261745]: 2025-11-29 06:50:16.598204069 +0000 UTC m=+0.128565889 container init bee74bc97f6963cd17b24984114892d0abe3d5878423ef7747a8f8d44ddc04e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_northcutt, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:50:16 np0005539508 podman[261745]: 2025-11-29 06:50:16.605556594 +0000 UTC m=+0.135918394 container start bee74bc97f6963cd17b24984114892d0abe3d5878423ef7747a8f8d44ddc04e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_northcutt, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:50:16 np0005539508 zen_northcutt[261761]: 167 167
Nov 29 01:50:16 np0005539508 systemd[1]: libpod-bee74bc97f6963cd17b24984114892d0abe3d5878423ef7747a8f8d44ddc04e8.scope: Deactivated successfully.
Nov 29 01:50:16 np0005539508 conmon[261761]: conmon bee74bc97f6963cd17b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bee74bc97f6963cd17b24984114892d0abe3d5878423ef7747a8f8d44ddc04e8.scope/container/memory.events
Nov 29 01:50:16 np0005539508 podman[261745]: 2025-11-29 06:50:16.611658374 +0000 UTC m=+0.142020184 container attach bee74bc97f6963cd17b24984114892d0abe3d5878423ef7747a8f8d44ddc04e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_northcutt, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:50:16 np0005539508 podman[261745]: 2025-11-29 06:50:16.61258978 +0000 UTC m=+0.142951590 container died bee74bc97f6963cd17b24984114892d0abe3d5878423ef7747a8f8d44ddc04e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:50:16 np0005539508 systemd[1]: var-lib-containers-storage-overlay-2197e93cc3399d7c2c3e52a7548394ecc61e07190d6f57ae5a7450dd16d162e3-merged.mount: Deactivated successfully.
Nov 29 01:50:16 np0005539508 podman[261745]: 2025-11-29 06:50:16.647975554 +0000 UTC m=+0.178337354 container remove bee74bc97f6963cd17b24984114892d0abe3d5878423ef7747a8f8d44ddc04e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 01:50:16 np0005539508 systemd[1]: libpod-conmon-bee74bc97f6963cd17b24984114892d0abe3d5878423ef7747a8f8d44ddc04e8.scope: Deactivated successfully.
Nov 29 01:50:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:16.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:16 np0005539508 podman[261786]: 2025-11-29 06:50:16.840145491 +0000 UTC m=+0.046807873 container create 769bdc2bba95ca427c8d517b5785d465e525353a67702321195bbac5bbda54fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_tu, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:50:16 np0005539508 systemd[1]: Started libpod-conmon-769bdc2bba95ca427c8d517b5785d465e525353a67702321195bbac5bbda54fd.scope.
Nov 29 01:50:16 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:50:16 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89471b8f4db54a33ffcaaa78643d0e97a9035d660e49493a144ce551c721d638/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:50:16 np0005539508 podman[261786]: 2025-11-29 06:50:16.821803831 +0000 UTC m=+0.028466263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:50:16 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89471b8f4db54a33ffcaaa78643d0e97a9035d660e49493a144ce551c721d638/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:50:16 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89471b8f4db54a33ffcaaa78643d0e97a9035d660e49493a144ce551c721d638/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:50:16 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89471b8f4db54a33ffcaaa78643d0e97a9035d660e49493a144ce551c721d638/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:50:16 np0005539508 podman[261786]: 2025-11-29 06:50:16.926660749 +0000 UTC m=+0.133323131 container init 769bdc2bba95ca427c8d517b5785d465e525353a67702321195bbac5bbda54fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 01:50:16 np0005539508 podman[261786]: 2025-11-29 06:50:16.933582862 +0000 UTC m=+0.140245244 container start 769bdc2bba95ca427c8d517b5785d465e525353a67702321195bbac5bbda54fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_tu, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 01:50:16 np0005539508 podman[261786]: 2025-11-29 06:50:16.937731897 +0000 UTC m=+0.144394279 container attach 769bdc2bba95ca427c8d517b5785d465e525353a67702321195bbac5bbda54fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_tu, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:50:16 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:50:17.240 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 01:50:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:50:17.241 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 01:50:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:50:17.241 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 01:50:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:50:17 np0005539508 elegant_tu[261802]: {
Nov 29 01:50:17 np0005539508 elegant_tu[261802]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:50:17 np0005539508 elegant_tu[261802]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:50:17 np0005539508 elegant_tu[261802]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:50:17 np0005539508 elegant_tu[261802]:        "osd_id": 1,
Nov 29 01:50:17 np0005539508 elegant_tu[261802]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:50:17 np0005539508 elegant_tu[261802]:        "type": "bluestore"
Nov 29 01:50:17 np0005539508 elegant_tu[261802]:    }
Nov 29 01:50:17 np0005539508 elegant_tu[261802]: }
Nov 29 01:50:17 np0005539508 systemd[1]: libpod-769bdc2bba95ca427c8d517b5785d465e525353a67702321195bbac5bbda54fd.scope: Deactivated successfully.
Nov 29 01:50:17 np0005539508 podman[261786]: 2025-11-29 06:50:17.879758373 +0000 UTC m=+1.086420785 container died 769bdc2bba95ca427c8d517b5785d465e525353a67702321195bbac5bbda54fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:50:18 np0005539508 systemd[1]: var-lib-containers-storage-overlay-89471b8f4db54a33ffcaaa78643d0e97a9035d660e49493a144ce551c721d638-merged.mount: Deactivated successfully.
Nov 29 01:50:18 np0005539508 podman[261786]: 2025-11-29 06:50:18.10816746 +0000 UTC m=+1.314829852 container remove 769bdc2bba95ca427c8d517b5785d465e525353a67702321195bbac5bbda54fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 01:50:18 np0005539508 systemd[1]: libpod-conmon-769bdc2bba95ca427c8d517b5785d465e525353a67702321195bbac5bbda54fd.scope: Deactivated successfully.
Nov 29 01:50:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:50:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:50:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:50:18 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:50:18 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 9b8cfe2c-a2b7-498e-9126-4e66c5faac47 does not exist
Nov 29 01:50:18 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 2c7f157c-b7db-4ded-beb0-741e69a61a20 does not exist
Nov 29 01:50:18 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev dc72e334-e47a-4453-a457-df9eef8f4032 does not exist
Nov 29 01:50:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:50:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:18.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:50:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:50:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:18.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:50:18 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:50:19 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:50:20 np0005539508 podman[261890]: 2025-11-29 06:50:20.113204719 +0000 UTC m=+0.066666426 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 01:50:20 np0005539508 podman[261891]: 2025-11-29 06:50:20.145991891 +0000 UTC m=+0.099216792 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 01:50:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:20.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:20.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:20 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:22.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:22.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:50:22 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:23 np0005539508 nova_compute[251877]: 2025-11-29 06:50:23.153 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 10.94 sec
Nov 29 01:50:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:50:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:50:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:50:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:50:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:50:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:50:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:50:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:24.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:50:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:50:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:24.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:50:24 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:50:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:26.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:50:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:26.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:26 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:27 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:50:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:50:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:28.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:50:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:28.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:28 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:50:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:50:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:50:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:50:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:50:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:50:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:50:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:50:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:50:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:50:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:50:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:30.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:50:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:30.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:30 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:50:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:32.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:50:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:50:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:32.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:50:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:50:32 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:34.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:34.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:34 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:50:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:36.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:50:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:36.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:36 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:50:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:38.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:50:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:38.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:50:38 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:39 np0005539508 nova_compute[251877]: 2025-11-29 06:50:39.596 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 01:50:39 np0005539508 nova_compute[251877]: 2025-11-29 06:50:39.597 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 01:50:39 np0005539508 nova_compute[251877]: 2025-11-29 06:50:39.728 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Refreshing inventories for resource provider 36ed0248-8d04-4532-95bb-daab89f12202 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 01:50:39 np0005539508 nova_compute[251877]: 2025-11-29 06:50:39.818 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Updating ProviderTree inventory for provider 36ed0248-8d04-4532-95bb-daab89f12202 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 01:50:39 np0005539508 nova_compute[251877]: 2025-11-29 06:50:39.818 251881 DEBUG nova.compute.provider_tree [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Updating inventory in ProviderTree for provider 36ed0248-8d04-4532-95bb-daab89f12202 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 01:50:39 np0005539508 nova_compute[251877]: 2025-11-29 06:50:39.833 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Refreshing aggregate associations for resource provider 36ed0248-8d04-4532-95bb-daab89f12202, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 01:50:39 np0005539508 nova_compute[251877]: 2025-11-29 06:50:39.861 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Refreshing trait associations for resource provider 36ed0248-8d04-4532-95bb-daab89f12202, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 01:50:39 np0005539508 nova_compute[251877]: 2025-11-29 06:50:39.878 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 01:50:40 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 01:50:40 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3008365598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 01:50:40 np0005539508 nova_compute[251877]: 2025-11-29 06:50:40.311 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 01:50:40 np0005539508 nova_compute[251877]: 2025-11-29 06:50:40.320 251881 DEBUG nova.compute.provider_tree [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed in ProviderTree for provider: 36ed0248-8d04-4532-95bb-daab89f12202 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 01:50:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:50:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:40.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:50:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:40.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:40 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:41 np0005539508 podman[262018]: 2025-11-29 06:50:41.094677127 +0000 UTC m=+0.067693985 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 29 01:50:42 np0005539508 nova_compute[251877]: 2025-11-29 06:50:42.402 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 9.25 sec
Nov 29 01:50:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:50:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:42.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:50:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:50:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:42.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:50:42 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:50:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:50:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:44.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:50:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:50:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:44.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:50:44 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:50:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:46.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:50:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:50:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:46.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:50:46 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:48 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:50:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:48.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:48.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:48 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:50.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:50:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:50.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:50:50 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:51 np0005539508 podman[262047]: 2025-11-29 06:50:51.139516906 +0000 UTC m=+0.099591242 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 01:50:51 np0005539508 podman[262048]: 2025-11-29 06:50:51.168131823 +0000 UTC m=+0.117402188 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 29 01:50:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:52.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:52.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:52 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:50:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:50:54
Nov 29 01:50:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:50:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:50:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'default.rgw.control', 'volumes', 'vms', 'default.rgw.meta', 'backups', '.mgr']
Nov 29 01:50:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:50:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:50:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:50:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:50:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:50:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:50:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:50:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:54.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:54.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:54 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:56.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:50:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:56.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:50:56 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:50:58 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:50:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:58.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:50:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:50:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:58.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:50:58 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:51:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:00.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:00.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:00 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:51:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:02.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:02 np0005539508 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 29 01:51:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:02.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:03 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:51:03 np0005539508 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 29 01:51:03 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:51:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:04.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:04.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:05 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.2 KiB/s rd, 0 B/s wr, 7 op/s
Nov 29 01:51:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:06.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:06.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:07 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 54 KiB/s rd, 0 B/s wr, 90 op/s
Nov 29 01:51:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:51:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:08.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:08.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:09 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 74 KiB/s rd, 0 B/s wr, 123 op/s
Nov 29 01:51:10 np0005539508 nova_compute[251877]: 2025-11-29 06:51:10.200 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed for provider 36ed0248-8d04-4532-95bb-daab89f12202 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 01:51:10 np0005539508 nova_compute[251877]: 2025-11-29 06:51:10.202 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 01:51:10 np0005539508 nova_compute[251877]: 2025-11-29 06:51:10.203 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 78.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 01:51:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:10.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:10.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:11 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 74 KiB/s rd, 0 B/s wr, 123 op/s
Nov 29 01:51:11 np0005539508 podman[262163]: 2025-11-29 06:51:11.577812771 +0000 UTC m=+0.107525754 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible)
Nov 29 01:51:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:12.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:12 np0005539508 nova_compute[251877]: 2025-11-29 06:51:12.697 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 20.29 sec
Nov 29 01:51:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:12.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:13 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 82 KiB/s rd, 0 B/s wr, 136 op/s
Nov 29 01:51:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:51:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:51:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:51:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:51:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:51:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:51:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:51:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:51:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:51:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:51:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:51:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:51:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:51:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:51:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:51:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:51:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:51:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:51:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:51:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:51:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:51:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:51:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:51:13 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:51:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:14.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:14.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:15 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 82 KiB/s rd, 0 B/s wr, 136 op/s
Nov 29 01:51:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:16.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:16.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:17 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Nov 29 01:51:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:51:17.243 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:51:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:51:17.244 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:51:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:51:17.245 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:51:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:51:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:18.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:18.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:19 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Nov 29 01:51:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:51:19 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:51:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:51:19 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:51:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 01:51:19 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 01:51:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 01:51:19 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 01:51:20 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:51:20 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:51:20 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 01:51:20 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 01:51:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:20.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:20.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:21 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 7.8 KiB/s rd, 0 B/s wr, 13 op/s
Nov 29 01:51:22 np0005539508 podman[262376]: 2025-11-29 06:51:22.099685376 +0000 UTC m=+0.063022715 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 29 01:51:22 np0005539508 podman[262377]: 2025-11-29 06:51:22.152685951 +0000 UTC m=+0.107460312 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:51:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:51:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:22.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:51:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:51:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:51:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:51:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:51:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:51:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:51:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:51:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:22.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:51:22 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 9fa089c7-a11d-4ae1-8f1a-44bed0fa8eaa does not exist
Nov 29 01:51:22 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 17afbd12-5284-4de0-924b-2d8f1fe44297 does not exist
Nov 29 01:51:22 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 1da5f078-eb4b-4580-8ed0-aec8f2edc064 does not exist
Nov 29 01:51:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:51:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:51:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:51:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:51:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:51:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:51:23 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 7.8 KiB/s rd, 0 B/s wr, 13 op/s
Nov 29 01:51:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:51:23 np0005539508 podman[262561]: 2025-11-29 06:51:23.713781746 +0000 UTC m=+0.043088501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:51:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:51:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:51:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:51:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:51:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:51:24 np0005539508 podman[262561]: 2025-11-29 06:51:24.006205313 +0000 UTC m=+0.335511968 container create 70aa9f2fa562b64804b2aa49daf23f3c08ef7c3d0266bc1d15ad4083e0b6ceaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chatterjee, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:51:24 np0005539508 systemd[1]: Started libpod-conmon-70aa9f2fa562b64804b2aa49daf23f3c08ef7c3d0266bc1d15ad4083e0b6ceaf.scope.
Nov 29 01:51:24 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:51:24 np0005539508 podman[262561]: 2025-11-29 06:51:24.292324346 +0000 UTC m=+0.621631091 container init 70aa9f2fa562b64804b2aa49daf23f3c08ef7c3d0266bc1d15ad4083e0b6ceaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chatterjee, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 01:51:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:51:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:51:24 np0005539508 podman[262561]: 2025-11-29 06:51:24.303115146 +0000 UTC m=+0.632421801 container start 70aa9f2fa562b64804b2aa49daf23f3c08ef7c3d0266bc1d15ad4083e0b6ceaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chatterjee, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 01:51:24 np0005539508 podman[262561]: 2025-11-29 06:51:24.306166391 +0000 UTC m=+0.635473086 container attach 70aa9f2fa562b64804b2aa49daf23f3c08ef7c3d0266bc1d15ad4083e0b6ceaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 01:51:24 np0005539508 systemd[1]: libpod-70aa9f2fa562b64804b2aa49daf23f3c08ef7c3d0266bc1d15ad4083e0b6ceaf.scope: Deactivated successfully.
Nov 29 01:51:24 np0005539508 condescending_chatterjee[262578]: 167 167
Nov 29 01:51:24 np0005539508 podman[262561]: 2025-11-29 06:51:24.309987828 +0000 UTC m=+0.639294483 container died 70aa9f2fa562b64804b2aa49daf23f3c08ef7c3d0266bc1d15ad4083e0b6ceaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:51:24 np0005539508 conmon[262578]: conmon 70aa9f2fa562b64804b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-70aa9f2fa562b64804b2aa49daf23f3c08ef7c3d0266bc1d15ad4083e0b6ceaf.scope/container/memory.events
Nov 29 01:51:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:51:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:51:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:51:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:51:24 np0005539508 nova_compute[251877]: 2025-11-29 06:51:24.328 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:51:24 np0005539508 nova_compute[251877]: 2025-11-29 06:51:24.330 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:51:24 np0005539508 systemd[1]: var-lib-containers-storage-overlay-96c7920229d497bb6dece33ffd45879e3c56a94e1497413c4cfc97038064a4fe-merged.mount: Deactivated successfully.
Nov 29 01:51:24 np0005539508 podman[262561]: 2025-11-29 06:51:24.359784443 +0000 UTC m=+0.689091128 container remove 70aa9f2fa562b64804b2aa49daf23f3c08ef7c3d0266bc1d15ad4083e0b6ceaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chatterjee, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:51:24 np0005539508 systemd[1]: libpod-conmon-70aa9f2fa562b64804b2aa49daf23f3c08ef7c3d0266bc1d15ad4083e0b6ceaf.scope: Deactivated successfully.
Nov 29 01:51:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:24.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:24 np0005539508 podman[262602]: 2025-11-29 06:51:24.566592659 +0000 UTC m=+0.055368442 container create 4bc481d1fca133b730ead2b025de24033300e0382b264b7e5dcc1018767ec224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:51:24 np0005539508 systemd[1]: Started libpod-conmon-4bc481d1fca133b730ead2b025de24033300e0382b264b7e5dcc1018767ec224.scope.
Nov 29 01:51:24 np0005539508 podman[262602]: 2025-11-29 06:51:24.540306207 +0000 UTC m=+0.029081980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:51:24 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:51:24 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa68873edf46f0586240f021bf8990e0d7f36674e8c4d4750c6df92850cc4a54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:51:24 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa68873edf46f0586240f021bf8990e0d7f36674e8c4d4750c6df92850cc4a54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:51:24 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa68873edf46f0586240f021bf8990e0d7f36674e8c4d4750c6df92850cc4a54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:51:24 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa68873edf46f0586240f021bf8990e0d7f36674e8c4d4750c6df92850cc4a54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:51:24 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa68873edf46f0586240f021bf8990e0d7f36674e8c4d4750c6df92850cc4a54/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:51:24 np0005539508 podman[262602]: 2025-11-29 06:51:24.7297787 +0000 UTC m=+0.218554463 container init 4bc481d1fca133b730ead2b025de24033300e0382b264b7e5dcc1018767ec224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 01:51:24 np0005539508 podman[262602]: 2025-11-29 06:51:24.739447089 +0000 UTC m=+0.228222872 container start 4bc481d1fca133b730ead2b025de24033300e0382b264b7e5dcc1018767ec224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 01:51:24 np0005539508 podman[262602]: 2025-11-29 06:51:24.763662212 +0000 UTC m=+0.252438055 container attach 4bc481d1fca133b730ead2b025de24033300e0382b264b7e5dcc1018767ec224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cray, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:51:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:24.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:25 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:51:25 np0005539508 distracted_cray[262618]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:51:25 np0005539508 distracted_cray[262618]: --> relative data size: 1.0
Nov 29 01:51:25 np0005539508 distracted_cray[262618]: --> All data devices are unavailable
Nov 29 01:51:25 np0005539508 systemd[1]: libpod-4bc481d1fca133b730ead2b025de24033300e0382b264b7e5dcc1018767ec224.scope: Deactivated successfully.
Nov 29 01:51:25 np0005539508 podman[262602]: 2025-11-29 06:51:25.737294618 +0000 UTC m=+1.226070401 container died 4bc481d1fca133b730ead2b025de24033300e0382b264b7e5dcc1018767ec224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:51:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.004000111s ======
Nov 29 01:51:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:26.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000111s
Nov 29 01:51:26 np0005539508 systemd[1]: var-lib-containers-storage-overlay-aa68873edf46f0586240f021bf8990e0d7f36674e8c4d4750c6df92850cc4a54-merged.mount: Deactivated successfully.
Nov 29 01:51:26 np0005539508 podman[262602]: 2025-11-29 06:51:26.882443697 +0000 UTC m=+2.371219480 container remove 4bc481d1fca133b730ead2b025de24033300e0382b264b7e5dcc1018767ec224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cray, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 01:51:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:26.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:26 np0005539508 systemd[1]: libpod-conmon-4bc481d1fca133b730ead2b025de24033300e0382b264b7e5dcc1018767ec224.scope: Deactivated successfully.
Nov 29 01:51:27 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:51:27 np0005539508 podman[262791]: 2025-11-29 06:51:27.650658716 +0000 UTC m=+0.046330990 container create 3ffe9df121bcd5d4c6f45188057367ab2b97d94df27d4b3b3ab9a21ec52bebb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:51:27 np0005539508 systemd[1]: Started libpod-conmon-3ffe9df121bcd5d4c6f45188057367ab2b97d94df27d4b3b3ab9a21ec52bebb1.scope.
Nov 29 01:51:27 np0005539508 podman[262791]: 2025-11-29 06:51:27.630795453 +0000 UTC m=+0.026467747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:51:27 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:51:27 np0005539508 podman[262791]: 2025-11-29 06:51:27.768062203 +0000 UTC m=+0.163734487 container init 3ffe9df121bcd5d4c6f45188057367ab2b97d94df27d4b3b3ab9a21ec52bebb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:51:27 np0005539508 podman[262791]: 2025-11-29 06:51:27.776500278 +0000 UTC m=+0.172172582 container start 3ffe9df121bcd5d4c6f45188057367ab2b97d94df27d4b3b3ab9a21ec52bebb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:51:27 np0005539508 vigorous_sutherland[262807]: 167 167
Nov 29 01:51:27 np0005539508 systemd[1]: libpod-3ffe9df121bcd5d4c6f45188057367ab2b97d94df27d4b3b3ab9a21ec52bebb1.scope: Deactivated successfully.
Nov 29 01:51:27 np0005539508 conmon[262807]: conmon 3ffe9df121bcd5d4c6f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3ffe9df121bcd5d4c6f45188057367ab2b97d94df27d4b3b3ab9a21ec52bebb1.scope/container/memory.events
Nov 29 01:51:27 np0005539508 podman[262791]: 2025-11-29 06:51:27.822523009 +0000 UTC m=+0.218195323 container attach 3ffe9df121bcd5d4c6f45188057367ab2b97d94df27d4b3b3ab9a21ec52bebb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sutherland, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Nov 29 01:51:27 np0005539508 podman[262791]: 2025-11-29 06:51:27.823349662 +0000 UTC m=+0.219021986 container died 3ffe9df121bcd5d4c6f45188057367ab2b97d94df27d4b3b3ab9a21ec52bebb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sutherland, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:51:27 np0005539508 systemd[1]: var-lib-containers-storage-overlay-b4e9195dae577743af8b239b688a1b70bc57e9b1e489ffb666ce16002efbe703-merged.mount: Deactivated successfully.
Nov 29 01:51:27 np0005539508 podman[262791]: 2025-11-29 06:51:27.998952439 +0000 UTC m=+0.394624713 container remove 3ffe9df121bcd5d4c6f45188057367ab2b97d94df27d4b3b3ab9a21ec52bebb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 01:51:28 np0005539508 systemd[1]: libpod-conmon-3ffe9df121bcd5d4c6f45188057367ab2b97d94df27d4b3b3ab9a21ec52bebb1.scope: Deactivated successfully.
Nov 29 01:51:28 np0005539508 podman[262831]: 2025-11-29 06:51:28.187025083 +0000 UTC m=+0.046384412 container create 58a2dac1b37ce6c84ff82b1f5d996cb27914fcf81671d14f13d90ae94363195e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 01:51:28 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:51:28 np0005539508 systemd[1]: Started libpod-conmon-58a2dac1b37ce6c84ff82b1f5d996cb27914fcf81671d14f13d90ae94363195e.scope.
Nov 29 01:51:28 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:51:28 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5c13b0a5e9df95195d441afa0a47b19e581040044513173d6d680e94f59ec46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:51:28 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5c13b0a5e9df95195d441afa0a47b19e581040044513173d6d680e94f59ec46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:51:28 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5c13b0a5e9df95195d441afa0a47b19e581040044513173d6d680e94f59ec46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:51:28 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5c13b0a5e9df95195d441afa0a47b19e581040044513173d6d680e94f59ec46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:51:28 np0005539508 podman[262831]: 2025-11-29 06:51:28.168349273 +0000 UTC m=+0.027708602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:51:28 np0005539508 podman[262831]: 2025-11-29 06:51:28.285033871 +0000 UTC m=+0.144393210 container init 58a2dac1b37ce6c84ff82b1f5d996cb27914fcf81671d14f13d90ae94363195e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jang, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:51:28 np0005539508 podman[262831]: 2025-11-29 06:51:28.292714864 +0000 UTC m=+0.152074203 container start 58a2dac1b37ce6c84ff82b1f5d996cb27914fcf81671d14f13d90ae94363195e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 01:51:28 np0005539508 podman[262831]: 2025-11-29 06:51:28.308313169 +0000 UTC m=+0.167672548 container attach 58a2dac1b37ce6c84ff82b1f5d996cb27914fcf81671d14f13d90ae94363195e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jang, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:51:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:28.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:28.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:29 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:51:29 np0005539508 sharp_jang[262847]: {
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:    "1": [
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:        {
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:            "devices": [
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:                "/dev/loop3"
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:            ],
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:            "lv_name": "ceph_lv0",
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:            "lv_size": "7511998464",
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:            "name": "ceph_lv0",
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:            "tags": {
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:                "ceph.cluster_name": "ceph",
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:                "ceph.crush_device_class": "",
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:                "ceph.encrypted": "0",
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:                "ceph.osd_id": "1",
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:                "ceph.type": "block",
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:                "ceph.vdo": "0"
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:            },
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:            "type": "block",
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:            "vg_name": "ceph_vg0"
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:        }
Nov 29 01:51:29 np0005539508 sharp_jang[262847]:    ]
Nov 29 01:51:29 np0005539508 sharp_jang[262847]: }
Nov 29 01:51:29 np0005539508 systemd[1]: libpod-58a2dac1b37ce6c84ff82b1f5d996cb27914fcf81671d14f13d90ae94363195e.scope: Deactivated successfully.
Nov 29 01:51:29 np0005539508 podman[262831]: 2025-11-29 06:51:29.185429218 +0000 UTC m=+1.044788547 container died 58a2dac1b37ce6c84ff82b1f5d996cb27914fcf81671d14f13d90ae94363195e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:51:29 np0005539508 nova_compute[251877]: 2025-11-29 06:51:29.402 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:51:29 np0005539508 nova_compute[251877]: 2025-11-29 06:51:29.404 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 01:51:29 np0005539508 nova_compute[251877]: 2025-11-29 06:51:29.405 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 01:51:29 np0005539508 systemd[1]: var-lib-containers-storage-overlay-f5c13b0a5e9df95195d441afa0a47b19e581040044513173d6d680e94f59ec46-merged.mount: Deactivated successfully.
Nov 29 01:51:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:51:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:51:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:51:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:51:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:51:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:51:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:51:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:51:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:51:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:51:29 np0005539508 nova_compute[251877]: 2025-11-29 06:51:29.766 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 7.07 sec#033[00m
Nov 29 01:51:30 np0005539508 podman[262831]: 2025-11-29 06:51:30.300485659 +0000 UTC m=+2.159844998 container remove 58a2dac1b37ce6c84ff82b1f5d996cb27914fcf81671d14f13d90ae94363195e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 01:51:30 np0005539508 systemd[1]: libpod-conmon-58a2dac1b37ce6c84ff82b1f5d996cb27914fcf81671d14f13d90ae94363195e.scope: Deactivated successfully.
Nov 29 01:51:30 np0005539508 nova_compute[251877]: 2025-11-29 06:51:30.499 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 01:51:30 np0005539508 nova_compute[251877]: 2025-11-29 06:51:30.501 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:51:30 np0005539508 nova_compute[251877]: 2025-11-29 06:51:30.501 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:51:30 np0005539508 nova_compute[251877]: 2025-11-29 06:51:30.502 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:51:30 np0005539508 nova_compute[251877]: 2025-11-29 06:51:30.502 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:51:30 np0005539508 nova_compute[251877]: 2025-11-29 06:51:30.502 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:51:30 np0005539508 nova_compute[251877]: 2025-11-29 06:51:30.502 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:51:30 np0005539508 nova_compute[251877]: 2025-11-29 06:51:30.503 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 01:51:30 np0005539508 nova_compute[251877]: 2025-11-29 06:51:30.504 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:51:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:30.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:30.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:31 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:51:31 np0005539508 nova_compute[251877]: 2025-11-29 06:51:31.088 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:51:31 np0005539508 nova_compute[251877]: 2025-11-29 06:51:31.089 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:51:31 np0005539508 nova_compute[251877]: 2025-11-29 06:51:31.089 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:51:31 np0005539508 nova_compute[251877]: 2025-11-29 06:51:31.090 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 01:51:31 np0005539508 nova_compute[251877]: 2025-11-29 06:51:31.091 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 01:51:31 np0005539508 podman[263008]: 2025-11-29 06:51:31.070637501 +0000 UTC m=+0.037220556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:51:31 np0005539508 podman[263008]: 2025-11-29 06:51:31.170913532 +0000 UTC m=+0.137496557 container create fb00729189b82a88368c713200ee8ea329094bab188e73092f3c27f493426e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cannon, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 01:51:31 np0005539508 systemd[1]: Started libpod-conmon-fb00729189b82a88368c713200ee8ea329094bab188e73092f3c27f493426e98.scope.
Nov 29 01:51:31 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:51:31 np0005539508 podman[263008]: 2025-11-29 06:51:31.443149128 +0000 UTC m=+0.409732183 container init fb00729189b82a88368c713200ee8ea329094bab188e73092f3c27f493426e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 01:51:31 np0005539508 podman[263008]: 2025-11-29 06:51:31.450796971 +0000 UTC m=+0.417380006 container start fb00729189b82a88368c713200ee8ea329094bab188e73092f3c27f493426e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:51:31 np0005539508 eager_cannon[263046]: 167 167
Nov 29 01:51:31 np0005539508 systemd[1]: libpod-fb00729189b82a88368c713200ee8ea329094bab188e73092f3c27f493426e98.scope: Deactivated successfully.
Nov 29 01:51:31 np0005539508 conmon[263046]: conmon fb00729189b82a88368c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fb00729189b82a88368c713200ee8ea329094bab188e73092f3c27f493426e98.scope/container/memory.events
Nov 29 01:51:31 np0005539508 podman[263008]: 2025-11-29 06:51:31.486415842 +0000 UTC m=+0.452998887 container attach fb00729189b82a88368c713200ee8ea329094bab188e73092f3c27f493426e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:51:31 np0005539508 podman[263008]: 2025-11-29 06:51:31.487692498 +0000 UTC m=+0.454275513 container died fb00729189b82a88368c713200ee8ea329094bab188e73092f3c27f493426e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cannon, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 01:51:31 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 01:51:31 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/763194160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 01:51:31 np0005539508 nova_compute[251877]: 2025-11-29 06:51:31.572 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 01:51:31 np0005539508 systemd[1]: var-lib-containers-storage-overlay-8bde9d2d4c7f98f349bfc2b320cd15dbf9184016f5677a7efb6747bd6c92417b-merged.mount: Deactivated successfully.
Nov 29 01:51:31 np0005539508 nova_compute[251877]: 2025-11-29 06:51:31.741 251881 WARNING nova.virt.libvirt.driver [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 01:51:31 np0005539508 nova_compute[251877]: 2025-11-29 06:51:31.744 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5140MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 01:51:31 np0005539508 nova_compute[251877]: 2025-11-29 06:51:31.744 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:51:31 np0005539508 nova_compute[251877]: 2025-11-29 06:51:31.744 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:51:32 np0005539508 podman[263008]: 2025-11-29 06:51:32.250269519 +0000 UTC m=+1.216852534 container remove fb00729189b82a88368c713200ee8ea329094bab188e73092f3c27f493426e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cannon, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 01:51:32 np0005539508 systemd[1]: libpod-conmon-fb00729189b82a88368c713200ee8ea329094bab188e73092f3c27f493426e98.scope: Deactivated successfully.
Nov 29 01:51:32 np0005539508 podman[263074]: 2025-11-29 06:51:32.468140623 +0000 UTC m=+0.085074038 container create 3db71141127a6fb56fcab2a0592f82fa591858fe1b4c37d463d96e081ee5af5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:51:32 np0005539508 podman[263074]: 2025-11-29 06:51:32.417977217 +0000 UTC m=+0.034910642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:51:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:32.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:32 np0005539508 systemd[1]: Started libpod-conmon-3db71141127a6fb56fcab2a0592f82fa591858fe1b4c37d463d96e081ee5af5f.scope.
Nov 29 01:51:32 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:51:32 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ef2d886d4e0785ed8f9d8b511467156f223b7765b15a60c55969eff82f01ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:51:32 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ef2d886d4e0785ed8f9d8b511467156f223b7765b15a60c55969eff82f01ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:51:32 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ef2d886d4e0785ed8f9d8b511467156f223b7765b15a60c55969eff82f01ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:51:32 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ef2d886d4e0785ed8f9d8b511467156f223b7765b15a60c55969eff82f01ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:51:32 np0005539508 podman[263074]: 2025-11-29 06:51:32.875049088 +0000 UTC m=+0.491982593 container init 3db71141127a6fb56fcab2a0592f82fa591858fe1b4c37d463d96e081ee5af5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sammet, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:51:32 np0005539508 podman[263074]: 2025-11-29 06:51:32.887861304 +0000 UTC m=+0.504794719 container start 3db71141127a6fb56fcab2a0592f82fa591858fe1b4c37d463d96e081ee5af5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:51:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:51:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:32.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:51:32 np0005539508 nova_compute[251877]: 2025-11-29 06:51:32.919 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 01:51:32 np0005539508 nova_compute[251877]: 2025-11-29 06:51:32.920 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 01:51:32 np0005539508 nova_compute[251877]: 2025-11-29 06:51:32.943 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 01:51:33 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:51:33 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:51:33 np0005539508 podman[263074]: 2025-11-29 06:51:33.21362416 +0000 UTC m=+0.830557575 container attach 3db71141127a6fb56fcab2a0592f82fa591858fe1b4c37d463d96e081ee5af5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sammet, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 01:51:33 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 01:51:33 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4133065214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 01:51:33 np0005539508 nova_compute[251877]: 2025-11-29 06:51:33.688 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.745s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 01:51:33 np0005539508 nova_compute[251877]: 2025-11-29 06:51:33.696 251881 DEBUG nova.compute.provider_tree [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed in ProviderTree for provider: 36ed0248-8d04-4532-95bb-daab89f12202 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 01:51:33 np0005539508 tender_sammet[263090]: {
Nov 29 01:51:33 np0005539508 tender_sammet[263090]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:51:33 np0005539508 tender_sammet[263090]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:51:33 np0005539508 tender_sammet[263090]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:51:33 np0005539508 tender_sammet[263090]:        "osd_id": 1,
Nov 29 01:51:33 np0005539508 tender_sammet[263090]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:51:33 np0005539508 tender_sammet[263090]:        "type": "bluestore"
Nov 29 01:51:33 np0005539508 tender_sammet[263090]:    }
Nov 29 01:51:33 np0005539508 tender_sammet[263090]: }
Nov 29 01:51:33 np0005539508 systemd[1]: libpod-3db71141127a6fb56fcab2a0592f82fa591858fe1b4c37d463d96e081ee5af5f.scope: Deactivated successfully.
Nov 29 01:51:33 np0005539508 podman[263074]: 2025-11-29 06:51:33.752578109 +0000 UTC m=+1.369511524 container died 3db71141127a6fb56fcab2a0592f82fa591858fe1b4c37d463d96e081ee5af5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sammet, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 01:51:34 np0005539508 systemd[1]: var-lib-containers-storage-overlay-d8ef2d886d4e0785ed8f9d8b511467156f223b7765b15a60c55969eff82f01ce-merged.mount: Deactivated successfully.
Nov 29 01:51:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:34.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:34.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:35 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:51:35 np0005539508 podman[263074]: 2025-11-29 06:51:35.212162189 +0000 UTC m=+2.829095614 container remove 3db71141127a6fb56fcab2a0592f82fa591858fe1b4c37d463d96e081ee5af5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:51:35 np0005539508 systemd[1]: libpod-conmon-3db71141127a6fb56fcab2a0592f82fa591858fe1b4c37d463d96e081ee5af5f.scope: Deactivated successfully.
Nov 29 01:51:35 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:51:35 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:51:35 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:51:35 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:51:35 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 02fc4dca-0dba-485d-9d31-6521235950f7 does not exist
Nov 29 01:51:35 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 85cac403-ec7d-4055-a086-e80cfec1d036 does not exist
Nov 29 01:51:35 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev dd2363ae-eb36-4341-9f5e-466e076d25d7 does not exist
Nov 29 01:51:36 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:51:36 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:51:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:36.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:36.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:37 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:51:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:51:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:38.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:38.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:39 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:51:40 np0005539508 nova_compute[251877]: 2025-11-29 06:51:40.526 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed for provider 36ed0248-8d04-4532-95bb-daab89f12202 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 01:51:40 np0005539508 nova_compute[251877]: 2025-11-29 06:51:40.528 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 01:51:40 np0005539508 nova_compute[251877]: 2025-11-29 06:51:40.529 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 8.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:51:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.002000056s ======
Nov 29 01:51:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:40.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Nov 29 01:51:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:40.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:41 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:51:42 np0005539508 podman[263251]: 2025-11-29 06:51:42.169403335 +0000 UTC m=+0.119202258 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 01:51:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:42.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:42.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:43 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:51:43 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:51:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:44.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:51:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:44.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:51:45 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:51:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:46.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:46.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:47 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:51:48 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:51:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:48.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:48.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:49 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:51:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:50.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:50.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:51 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:51:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:51:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:52.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:51:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:52.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:53 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:51:53 np0005539508 podman[263281]: 2025-11-29 06:51:53.120594179 +0000 UTC m=+0.078742119 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 01:51:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:51:53 np0005539508 podman[263282]: 2025-11-29 06:51:53.224838692 +0000 UTC m=+0.181206642 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 01:51:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:51:54
Nov 29 01:51:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:51:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:51:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', '.mgr', 'backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', '.rgw.root', 'default.rgw.meta']
Nov 29 01:51:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:51:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:51:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:51:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:51:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:51:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:51:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:51:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:54.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:54.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:55 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:51:56 np0005539508 nova_compute[251877]: 2025-11-29 06:51:56.038 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 6.27 sec#033[00m
Nov 29 01:51:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:56.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:56.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:57 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:51:58 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:51:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:51:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:58.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:51:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:51:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:51:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:58.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:51:59 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:52:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:00.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:52:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:00.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:01 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:52:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:02.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:52:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:52:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:02.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:52:03 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:03 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:52:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:04.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:04.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:05 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:06.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:06.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:07 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:52:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:08.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:08.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:09 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:10.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:52:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:10.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:52:11 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:12.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:12.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:13 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:13 np0005539508 podman[263393]: 2025-11-29 06:52:13.111816566 +0000 UTC m=+0.070251850 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd)
Nov 29 01:52:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:52:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:52:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:52:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:52:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:52:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:52:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:52:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:52:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:52:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:52:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:52:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:52:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:52:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:52:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:52:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:52:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:52:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:52:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:52:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:52:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:52:13 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:52:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:52:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:52:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:14.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:52:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:14.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:52:15 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:16.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:16.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:17 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:52:17.244 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:52:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:52:17.246 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:52:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:52:17.246 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:52:18 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:52:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:18.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:18.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:19 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:20.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:52:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:20.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:52:21 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:52:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:22.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:52:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:52:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:22.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:52:23 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:52:24 np0005539508 podman[263473]: 2025-11-29 06:52:24.091280272 +0000 UTC m=+0.059618352 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 01:52:24 np0005539508 podman[263474]: 2025-11-29 06:52:24.133730573 +0000 UTC m=+0.095809528 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Nov 29 01:52:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:52:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:52:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:52:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:52:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:52:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:52:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:24.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:52:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:24.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:52:25 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:52:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:26.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:52:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:26.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:27 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:27 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Nov 29 01:52:27 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:27.499645) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 01:52:27 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Nov 29 01:52:27 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399147499706, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 2107, "num_deletes": 251, "total_data_size": 4139105, "memory_usage": 4200592, "flush_reason": "Manual Compaction"}
Nov 29 01:52:27 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Nov 29 01:52:28 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399148184337, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 4024087, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20425, "largest_seqno": 22530, "table_properties": {"data_size": 4014455, "index_size": 6126, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19000, "raw_average_key_size": 20, "raw_value_size": 3995455, "raw_average_value_size": 4214, "num_data_blocks": 274, "num_entries": 948, "num_filter_entries": 948, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764398925, "oldest_key_time": 1764398925, "file_creation_time": 1764399147, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:52:28 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 684750 microseconds, and 16535 cpu microseconds.
Nov 29 01:52:28 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 01:52:28 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:52:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:28.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:52:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:28.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:52:29 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:28.184392) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 4024087 bytes OK
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:28.184419) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.127134) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.127197) EVENT_LOG_v1 {"time_micros": 1764399149127185, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.127232) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 4130525, prev total WAL file size 4138461, number of live WAL files 2.
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.154935) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(3929KB)], [47(7197KB)]
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399149155063, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 11394448, "oldest_snapshot_seqno": -1}
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5129 keys, 9348926 bytes, temperature: kUnknown
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399149288673, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 9348926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9314170, "index_size": 20822, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12869, "raw_key_size": 128794, "raw_average_key_size": 25, "raw_value_size": 9220801, "raw_average_value_size": 1797, "num_data_blocks": 857, "num_entries": 5129, "num_filter_entries": 5129, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 1764399149, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.289417) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 9348926 bytes
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.295001) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 85.2 rd, 69.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 7.0 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(5.2) write-amplify(2.3) OK, records in: 5648, records dropped: 519 output_compression: NoCompression
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.295043) EVENT_LOG_v1 {"time_micros": 1764399149295024, "job": 24, "event": "compaction_finished", "compaction_time_micros": 133708, "compaction_time_cpu_micros": 42820, "output_level": 6, "num_output_files": 1, "total_output_size": 9348926, "num_input_records": 5648, "num_output_records": 5129, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399149296739, "job": 24, "event": "table_file_deletion", "file_number": 49}
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399149299977, "job": 24, "event": "table_file_deletion", "file_number": 47}
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.154743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.300029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.300036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.300040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.300044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.300048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.300502) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399149300544, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 269, "num_deletes": 256, "total_data_size": 43721, "memory_usage": 50824, "flush_reason": "Manual Compaction"}
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399149303801, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 44177, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22531, "largest_seqno": 22799, "table_properties": {"data_size": 42314, "index_size": 92, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4366, "raw_average_key_size": 16, "raw_value_size": 38714, "raw_average_value_size": 145, "num_data_blocks": 4, "num_entries": 266, "num_filter_entries": 266, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764399149, "oldest_key_time": 1764399149, "file_creation_time": 1764399149, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 3352 microseconds, and 1197 cpu microseconds.
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.303855) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 44177 bytes OK
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.303939) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.305789) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.305814) EVENT_LOG_v1 {"time_micros": 1764399149305806, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.305830) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 41642, prev total WAL file size 41642, number of live WAL files 2.
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.306323) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323534' seq:72057594037927935, type:22 .. '6C6F676D00353036' seq:0, type:0; will stop at (end)
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(43KB)], [50(9129KB)]
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399149306372, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 9393103, "oldest_snapshot_seqno": -1}
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4876 keys, 9259051 bytes, temperature: kUnknown
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399149402170, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9259051, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9225451, "index_size": 20306, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12229, "raw_key_size": 124728, "raw_average_key_size": 25, "raw_value_size": 9135937, "raw_average_value_size": 1873, "num_data_blocks": 830, "num_entries": 4876, "num_filter_entries": 4876, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 1764399149, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.402857) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9259051 bytes
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.405009) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 97.9 rd, 96.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 8.9 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(422.2) write-amplify(209.6) OK, records in: 5395, records dropped: 519 output_compression: NoCompression
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.405040) EVENT_LOG_v1 {"time_micros": 1764399149405026, "job": 26, "event": "compaction_finished", "compaction_time_micros": 95897, "compaction_time_cpu_micros": 36542, "output_level": 6, "num_output_files": 1, "total_output_size": 9259051, "num_input_records": 5395, "num_output_records": 4876, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399149405508, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399149408847, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.306264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.408925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.408932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.408935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.408938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:52:29 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.408941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:52:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:52:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:52:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:52:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:52:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:52:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:52:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:52:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:52:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:52:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:52:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:30.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:52:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:31.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:52:31 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:52:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:32.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:52:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:33.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:33 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:52:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:34.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:52:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:35.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:52:35 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.002000056s ======
Nov 29 01:52:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:36.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Nov 29 01:52:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:37.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:37 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 01:52:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:52:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 01:52:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:52:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:52:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:38.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:52:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:52:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:39.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:52:39 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:52:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 01:52:39 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 01:52:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:52:39 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:52:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:52:39 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:52:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:52:39 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:52:39 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 55cde076-0d46-43a1-9b8b-69f9b9d58e27 does not exist
Nov 29 01:52:39 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 0510e2d2-ce6d-4bff-ab43-14e1d6a1b85d does not exist
Nov 29 01:52:39 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 5004ea57-5c85-496f-b414-4cfe043fb214 does not exist
Nov 29 01:52:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:52:39 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:52:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:52:39 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:52:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:52:39 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:52:39 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:52:39 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:52:39 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 01:52:39 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:52:40 np0005539508 podman[263852]: 2025-11-29 06:52:40.188060468 +0000 UTC m=+0.045005513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:52:40 np0005539508 podman[263852]: 2025-11-29 06:52:40.464443748 +0000 UTC m=+0.321388743 container create 444f39d8b3c3b06f41ec3e8a9b48b14988ad14ce27b253b3587f3211c1c24bd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:52:40 np0005539508 nova_compute[251877]: 2025-11-29 06:52:40.532 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:52:40 np0005539508 nova_compute[251877]: 2025-11-29 06:52:40.533 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:52:40 np0005539508 nova_compute[251877]: 2025-11-29 06:52:40.533 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 01:52:40 np0005539508 nova_compute[251877]: 2025-11-29 06:52:40.533 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 01:52:40 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:52:40 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:52:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:40.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:40 np0005539508 systemd[1]: Started libpod-conmon-444f39d8b3c3b06f41ec3e8a9b48b14988ad14ce27b253b3587f3211c1c24bd2.scope.
Nov 29 01:52:40 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:52:40 np0005539508 podman[263852]: 2025-11-29 06:52:40.75839183 +0000 UTC m=+0.615336825 container init 444f39d8b3c3b06f41ec3e8a9b48b14988ad14ce27b253b3587f3211c1c24bd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 01:52:40 np0005539508 podman[263852]: 2025-11-29 06:52:40.772225918 +0000 UTC m=+0.629170873 container start 444f39d8b3c3b06f41ec3e8a9b48b14988ad14ce27b253b3587f3211c1c24bd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cohen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:52:40 np0005539508 podman[263852]: 2025-11-29 06:52:40.77836293 +0000 UTC m=+0.635307975 container attach 444f39d8b3c3b06f41ec3e8a9b48b14988ad14ce27b253b3587f3211c1c24bd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cohen, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:52:40 np0005539508 hopeful_cohen[263869]: 167 167
Nov 29 01:52:40 np0005539508 systemd[1]: libpod-444f39d8b3c3b06f41ec3e8a9b48b14988ad14ce27b253b3587f3211c1c24bd2.scope: Deactivated successfully.
Nov 29 01:52:40 np0005539508 podman[263852]: 2025-11-29 06:52:40.780982213 +0000 UTC m=+0.637927178 container died 444f39d8b3c3b06f41ec3e8a9b48b14988ad14ce27b253b3587f3211c1c24bd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cohen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 29 01:52:40 np0005539508 systemd[1]: var-lib-containers-storage-overlay-1b876d9cad8d01be45621d05765b94f24f79dec86f669bde1270e1b7626651e9-merged.mount: Deactivated successfully.
Nov 29 01:52:40 np0005539508 podman[263852]: 2025-11-29 06:52:40.82722537 +0000 UTC m=+0.684170355 container remove 444f39d8b3c3b06f41ec3e8a9b48b14988ad14ce27b253b3587f3211c1c24bd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:52:40 np0005539508 systemd[1]: libpod-conmon-444f39d8b3c3b06f41ec3e8a9b48b14988ad14ce27b253b3587f3211c1c24bd2.scope: Deactivated successfully.
Nov 29 01:52:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:52:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:41.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:52:41 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:41 np0005539508 podman[263892]: 2025-11-29 06:52:41.06866516 +0000 UTC m=+0.064888941 container create 355d706cd92412f2a83773f40f90bbd937b6bf32c6415e15308bdef51bbc5dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:52:41 np0005539508 systemd[1]: Started libpod-conmon-355d706cd92412f2a83773f40f90bbd937b6bf32c6415e15308bdef51bbc5dbe.scope.
Nov 29 01:52:41 np0005539508 podman[263892]: 2025-11-29 06:52:41.043783202 +0000 UTC m=+0.040007023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:52:41 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:52:41 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f8b26056ea2a64cfdb4f7d770ab8f4021bbe74c6870cd68df07af7fc9e86a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:52:41 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f8b26056ea2a64cfdb4f7d770ab8f4021bbe74c6870cd68df07af7fc9e86a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:52:41 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f8b26056ea2a64cfdb4f7d770ab8f4021bbe74c6870cd68df07af7fc9e86a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:52:41 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f8b26056ea2a64cfdb4f7d770ab8f4021bbe74c6870cd68df07af7fc9e86a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:52:41 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f8b26056ea2a64cfdb4f7d770ab8f4021bbe74c6870cd68df07af7fc9e86a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:52:41 np0005539508 podman[263892]: 2025-11-29 06:52:41.181422512 +0000 UTC m=+0.177646313 container init 355d706cd92412f2a83773f40f90bbd937b6bf32c6415e15308bdef51bbc5dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cannon, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 01:52:41 np0005539508 podman[263892]: 2025-11-29 06:52:41.193626484 +0000 UTC m=+0.189850285 container start 355d706cd92412f2a83773f40f90bbd937b6bf32c6415e15308bdef51bbc5dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cannon, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 01:52:41 np0005539508 podman[263892]: 2025-11-29 06:52:41.198054518 +0000 UTC m=+0.194278339 container attach 355d706cd92412f2a83773f40f90bbd937b6bf32c6415e15308bdef51bbc5dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:52:41 np0005539508 recursing_cannon[263909]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:52:41 np0005539508 recursing_cannon[263909]: --> relative data size: 1.0
Nov 29 01:52:41 np0005539508 recursing_cannon[263909]: --> All data devices are unavailable
Nov 29 01:52:42 np0005539508 systemd[1]: libpod-355d706cd92412f2a83773f40f90bbd937b6bf32c6415e15308bdef51bbc5dbe.scope: Deactivated successfully.
Nov 29 01:52:42 np0005539508 podman[263892]: 2025-11-29 06:52:42.00413532 +0000 UTC m=+1.000359091 container died 355d706cd92412f2a83773f40f90bbd937b6bf32c6415e15308bdef51bbc5dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cannon, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 01:52:42 np0005539508 systemd[1]: var-lib-containers-storage-overlay-34f8b26056ea2a64cfdb4f7d770ab8f4021bbe74c6870cd68df07af7fc9e86a3-merged.mount: Deactivated successfully.
Nov 29 01:52:42 np0005539508 podman[263892]: 2025-11-29 06:52:42.058068372 +0000 UTC m=+1.054292163 container remove 355d706cd92412f2a83773f40f90bbd937b6bf32c6415e15308bdef51bbc5dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cannon, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:52:42 np0005539508 systemd[1]: libpod-conmon-355d706cd92412f2a83773f40f90bbd937b6bf32c6415e15308bdef51bbc5dbe.scope: Deactivated successfully.
Nov 29 01:52:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:42.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:42 np0005539508 podman[264079]: 2025-11-29 06:52:42.779300145 +0000 UTC m=+0.040780815 container create ee5b9ca561a0507d37dd1fe2c602929f2a736933ee1264c7c2e138e256466441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_herschel, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 01:52:42 np0005539508 systemd[1]: Started libpod-conmon-ee5b9ca561a0507d37dd1fe2c602929f2a736933ee1264c7c2e138e256466441.scope.
Nov 29 01:52:42 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:52:42 np0005539508 podman[264079]: 2025-11-29 06:52:42.761704301 +0000 UTC m=+0.023184991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:52:42 np0005539508 podman[264079]: 2025-11-29 06:52:42.870835041 +0000 UTC m=+0.132315731 container init ee5b9ca561a0507d37dd1fe2c602929f2a736933ee1264c7c2e138e256466441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_herschel, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:52:42 np0005539508 podman[264079]: 2025-11-29 06:52:42.880602375 +0000 UTC m=+0.142083045 container start ee5b9ca561a0507d37dd1fe2c602929f2a736933ee1264c7c2e138e256466441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:52:42 np0005539508 podman[264079]: 2025-11-29 06:52:42.884676569 +0000 UTC m=+0.146157239 container attach ee5b9ca561a0507d37dd1fe2c602929f2a736933ee1264c7c2e138e256466441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_herschel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:52:42 np0005539508 gifted_herschel[264095]: 167 167
Nov 29 01:52:42 np0005539508 systemd[1]: libpod-ee5b9ca561a0507d37dd1fe2c602929f2a736933ee1264c7c2e138e256466441.scope: Deactivated successfully.
Nov 29 01:52:42 np0005539508 conmon[264095]: conmon ee5b9ca561a0507d37dd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ee5b9ca561a0507d37dd1fe2c602929f2a736933ee1264c7c2e138e256466441.scope/container/memory.events
Nov 29 01:52:42 np0005539508 podman[264079]: 2025-11-29 06:52:42.887448657 +0000 UTC m=+0.148929357 container died ee5b9ca561a0507d37dd1fe2c602929f2a736933ee1264c7c2e138e256466441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_herschel, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 01:52:42 np0005539508 systemd[1]: var-lib-containers-storage-overlay-7a86eb36749eb6541f079174421735e0555a8b5518142a8501390198a44f8dba-merged.mount: Deactivated successfully.
Nov 29 01:52:42 np0005539508 podman[264079]: 2025-11-29 06:52:42.93889755 +0000 UTC m=+0.200378220 container remove ee5b9ca561a0507d37dd1fe2c602929f2a736933ee1264c7c2e138e256466441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_herschel, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:52:42 np0005539508 systemd[1]: libpod-conmon-ee5b9ca561a0507d37dd1fe2c602929f2a736933ee1264c7c2e138e256466441.scope: Deactivated successfully.
Nov 29 01:52:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:52:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:43.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:52:43 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:43 np0005539508 podman[264120]: 2025-11-29 06:52:43.186101361 +0000 UTC m=+0.073188943 container create 6b5717a6d62b06a94134bcfb967e60d5c73f7667a35abf5f204c59fc57bcdee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_faraday, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 01:52:43 np0005539508 systemd[1]: Started libpod-conmon-6b5717a6d62b06a94134bcfb967e60d5c73f7667a35abf5f204c59fc57bcdee9.scope.
Nov 29 01:52:43 np0005539508 podman[264120]: 2025-11-29 06:52:43.154179176 +0000 UTC m=+0.041266818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:52:43 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:52:43 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4b1b3123b3eb65886050116b1f3dc88f8af021b3f239b6cedcd2713a16c9d33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:52:43 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4b1b3123b3eb65886050116b1f3dc88f8af021b3f239b6cedcd2713a16c9d33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:52:43 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4b1b3123b3eb65886050116b1f3dc88f8af021b3f239b6cedcd2713a16c9d33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:52:43 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4b1b3123b3eb65886050116b1f3dc88f8af021b3f239b6cedcd2713a16c9d33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:52:43 np0005539508 podman[264120]: 2025-11-29 06:52:43.299188082 +0000 UTC m=+0.186275694 container init 6b5717a6d62b06a94134bcfb967e60d5c73f7667a35abf5f204c59fc57bcdee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 01:52:43 np0005539508 podman[264120]: 2025-11-29 06:52:43.309198932 +0000 UTC m=+0.196286504 container start 6b5717a6d62b06a94134bcfb967e60d5c73f7667a35abf5f204c59fc57bcdee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:52:43 np0005539508 podman[264120]: 2025-11-29 06:52:43.313847513 +0000 UTC m=+0.200935085 container attach 6b5717a6d62b06a94134bcfb967e60d5c73f7667a35abf5f204c59fc57bcdee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:52:43 np0005539508 podman[264134]: 2025-11-29 06:52:43.357280651 +0000 UTC m=+0.119953525 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]: {
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:    "1": [
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:        {
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:            "devices": [
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:                "/dev/loop3"
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:            ],
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:            "lv_name": "ceph_lv0",
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:            "lv_size": "7511998464",
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:            "name": "ceph_lv0",
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:            "tags": {
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:                "ceph.cluster_name": "ceph",
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:                "ceph.crush_device_class": "",
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:                "ceph.encrypted": "0",
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:                "ceph.osd_id": "1",
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:                "ceph.type": "block",
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:                "ceph.vdo": "0"
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:            },
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:            "type": "block",
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:            "vg_name": "ceph_vg0"
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:        }
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]:    ]
Nov 29 01:52:44 np0005539508 vigilant_faraday[264137]: }
Nov 29 01:52:44 np0005539508 systemd[1]: libpod-6b5717a6d62b06a94134bcfb967e60d5c73f7667a35abf5f204c59fc57bcdee9.scope: Deactivated successfully.
Nov 29 01:52:44 np0005539508 podman[264120]: 2025-11-29 06:52:44.05024286 +0000 UTC m=+0.937330402 container died 6b5717a6d62b06a94134bcfb967e60d5c73f7667a35abf5f204c59fc57bcdee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_faraday, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:52:44 np0005539508 systemd[1]: var-lib-containers-storage-overlay-f4b1b3123b3eb65886050116b1f3dc88f8af021b3f239b6cedcd2713a16c9d33-merged.mount: Deactivated successfully.
Nov 29 01:52:44 np0005539508 podman[264120]: 2025-11-29 06:52:44.111125967 +0000 UTC m=+0.998213509 container remove 6b5717a6d62b06a94134bcfb967e60d5c73f7667a35abf5f204c59fc57bcdee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:52:44 np0005539508 systemd[1]: libpod-conmon-6b5717a6d62b06a94134bcfb967e60d5c73f7667a35abf5f204c59fc57bcdee9.scope: Deactivated successfully.
Nov 29 01:52:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:52:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:52:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:44.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:52:44 np0005539508 podman[264321]: 2025-11-29 06:52:44.86623471 +0000 UTC m=+0.096904789 container create 9e3b8a1a11f259771043b0215248d9534c9b87d94639cb2e73cb4d01a5b8a62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ramanujan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:52:44 np0005539508 podman[264321]: 2025-11-29 06:52:44.798606413 +0000 UTC m=+0.029276562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:52:44 np0005539508 systemd[1]: Started libpod-conmon-9e3b8a1a11f259771043b0215248d9534c9b87d94639cb2e73cb4d01a5b8a62b.scope.
Nov 29 01:52:44 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:52:45 np0005539508 podman[264321]: 2025-11-29 06:52:44.999749553 +0000 UTC m=+0.230419702 container init 9e3b8a1a11f259771043b0215248d9534c9b87d94639cb2e73cb4d01a5b8a62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ramanujan, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 01:52:45 np0005539508 podman[264321]: 2025-11-29 06:52:45.011845832 +0000 UTC m=+0.242515911 container start 9e3b8a1a11f259771043b0215248d9534c9b87d94639cb2e73cb4d01a5b8a62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 01:52:45 np0005539508 podman[264321]: 2025-11-29 06:52:45.01638933 +0000 UTC m=+0.247059499 container attach 9e3b8a1a11f259771043b0215248d9534c9b87d94639cb2e73cb4d01a5b8a62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ramanujan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:52:45 np0005539508 vibrant_ramanujan[264337]: 167 167
Nov 29 01:52:45 np0005539508 systemd[1]: libpod-9e3b8a1a11f259771043b0215248d9534c9b87d94639cb2e73cb4d01a5b8a62b.scope: Deactivated successfully.
Nov 29 01:52:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:52:45 np0005539508 conmon[264337]: conmon 9e3b8a1a11f259771043 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e3b8a1a11f259771043b0215248d9534c9b87d94639cb2e73cb4d01a5b8a62b.scope/container/memory.events
Nov 29 01:52:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:45.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:52:45 np0005539508 podman[264321]: 2025-11-29 06:52:45.022214693 +0000 UTC m=+0.252884802 container died 9e3b8a1a11f259771043b0215248d9534c9b87d94639cb2e73cb4d01a5b8a62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 01:52:45 np0005539508 systemd[1]: var-lib-containers-storage-overlay-8764278fbd62e8f219fecdca25fe15f8ddef7e12c440487c974abc7059d2720c-merged.mount: Deactivated successfully.
Nov 29 01:52:45 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:45 np0005539508 podman[264321]: 2025-11-29 06:52:45.065960959 +0000 UTC m=+0.296631028 container remove 9e3b8a1a11f259771043b0215248d9534c9b87d94639cb2e73cb4d01a5b8a62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ramanujan, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 01:52:45 np0005539508 systemd[1]: libpod-conmon-9e3b8a1a11f259771043b0215248d9534c9b87d94639cb2e73cb4d01a5b8a62b.scope: Deactivated successfully.
Nov 29 01:52:45 np0005539508 podman[264362]: 2025-11-29 06:52:45.242614133 +0000 UTC m=+0.052523814 container create 7195d203970fb6b0419251935e3b2c61bc26c0a953f609ad23befba47b59915b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 01:52:45 np0005539508 systemd[1]: Started libpod-conmon-7195d203970fb6b0419251935e3b2c61bc26c0a953f609ad23befba47b59915b.scope.
Nov 29 01:52:45 np0005539508 podman[264362]: 2025-11-29 06:52:45.221153301 +0000 UTC m=+0.031063022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:52:45 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:52:45 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ac52778b30f401bb59c08ea040498b26cc37aa115d85014e8dec1cfcbd2c39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:52:45 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ac52778b30f401bb59c08ea040498b26cc37aa115d85014e8dec1cfcbd2c39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:52:45 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ac52778b30f401bb59c08ea040498b26cc37aa115d85014e8dec1cfcbd2c39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:52:45 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ac52778b30f401bb59c08ea040498b26cc37aa115d85014e8dec1cfcbd2c39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:52:45 np0005539508 podman[264362]: 2025-11-29 06:52:45.330638451 +0000 UTC m=+0.140548142 container init 7195d203970fb6b0419251935e3b2c61bc26c0a953f609ad23befba47b59915b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 01:52:45 np0005539508 podman[264362]: 2025-11-29 06:52:45.341230138 +0000 UTC m=+0.151139809 container start 7195d203970fb6b0419251935e3b2c61bc26c0a953f609ad23befba47b59915b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:52:45 np0005539508 podman[264362]: 2025-11-29 06:52:45.345269771 +0000 UTC m=+0.155179462 container attach 7195d203970fb6b0419251935e3b2c61bc26c0a953f609ad23befba47b59915b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 01:52:46 np0005539508 recursing_poitras[264378]: {
Nov 29 01:52:46 np0005539508 recursing_poitras[264378]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:52:46 np0005539508 recursing_poitras[264378]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:52:46 np0005539508 recursing_poitras[264378]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:52:46 np0005539508 recursing_poitras[264378]:        "osd_id": 1,
Nov 29 01:52:46 np0005539508 recursing_poitras[264378]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:52:46 np0005539508 recursing_poitras[264378]:        "type": "bluestore"
Nov 29 01:52:46 np0005539508 recursing_poitras[264378]:    }
Nov 29 01:52:46 np0005539508 recursing_poitras[264378]: }
Nov 29 01:52:46 np0005539508 systemd[1]: libpod-7195d203970fb6b0419251935e3b2c61bc26c0a953f609ad23befba47b59915b.scope: Deactivated successfully.
Nov 29 01:52:46 np0005539508 podman[264362]: 2025-11-29 06:52:46.217670742 +0000 UTC m=+1.027580453 container died 7195d203970fb6b0419251935e3b2c61bc26c0a953f609ad23befba47b59915b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:52:46 np0005539508 systemd[1]: var-lib-containers-storage-overlay-e9ac52778b30f401bb59c08ea040498b26cc37aa115d85014e8dec1cfcbd2c39-merged.mount: Deactivated successfully.
Nov 29 01:52:46 np0005539508 podman[264362]: 2025-11-29 06:52:46.352173053 +0000 UTC m=+1.162082734 container remove 7195d203970fb6b0419251935e3b2c61bc26c0a953f609ad23befba47b59915b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_poitras, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:52:46 np0005539508 systemd[1]: libpod-conmon-7195d203970fb6b0419251935e3b2c61bc26c0a953f609ad23befba47b59915b.scope: Deactivated successfully.
Nov 29 01:52:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:52:46 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:52:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:52:46 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:52:46 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev c80b7213-8227-4d4b-a2de-8f8fb582a31e does not exist
Nov 29 01:52:46 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev f68f7640-dbb7-4640-bb41-7ac24cafa439 does not exist
Nov 29 01:52:46 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev dd2d459b-7e4f-47e5-ac01-4d063ba52b39 does not exist
Nov 29 01:52:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:46.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:47.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:47 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:47 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:52:47 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:52:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:52:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:48.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:52:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:52:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:49.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:52:49 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:52:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:50.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:51.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:51 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:52:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:52.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:52:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:52:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:53.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:52:53 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:52:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:52:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:52:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:52:54
Nov 29 01:52:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:52:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:52:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['volumes', 'backups', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.control', 'vms', 'default.rgw.meta']
Nov 29 01:52:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:52:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:52:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:52:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:52:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:52:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:54.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:55.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:55 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:55 np0005539508 podman[264467]: 2025-11-29 06:52:55.14893299 +0000 UTC m=+0.100413367 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 01:52:55 np0005539508 podman[264468]: 2025-11-29 06:52:55.19671359 +0000 UTC m=+0.148319860 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 01:52:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:56.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:57.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:57 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:52:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:58.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:52:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:52:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:52:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:59.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:52:59 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:52:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:53:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:53:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:00.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:53:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:01.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:01 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:02.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:53:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:03.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:53:03 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:53:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:53:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:04.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:53:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:53:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:05.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:53:05 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:53:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:06.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:53:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:53:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:07.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:53:07 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:08.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:53:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:09.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:53:09 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:53:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:10.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:11.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:11 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:53:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:12.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:53:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:13.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:13 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:53:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:53:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:53:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:53:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:53:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:53:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:53:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:53:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:53:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:53:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:53:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:53:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:53:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:53:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:53:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:53:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:53:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:53:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:53:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:53:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:53:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:53:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:53:14 np0005539508 podman[264573]: 2025-11-29 06:53:14.101335824 +0000 UTC m=+0.065931930 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 01:53:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:53:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:53:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:14.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:53:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:15.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:15 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:16.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:53:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:17.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:53:17 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:53:17.245 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:53:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:53:17.247 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:53:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:53:17.248 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:53:18 np0005539508 nova_compute[251877]: 2025-11-29 06:53:18.295 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 01:53:18 np0005539508 nova_compute[251877]: 2025-11-29 06:53:18.296 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:53:18 np0005539508 nova_compute[251877]: 2025-11-29 06:53:18.297 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:53:18 np0005539508 nova_compute[251877]: 2025-11-29 06:53:18.297 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:53:18 np0005539508 nova_compute[251877]: 2025-11-29 06:53:18.297 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:53:18 np0005539508 nova_compute[251877]: 2025-11-29 06:53:18.298 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:53:18 np0005539508 nova_compute[251877]: 2025-11-29 06:53:18.298 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:53:18 np0005539508 nova_compute[251877]: 2025-11-29 06:53:18.299 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 01:53:18 np0005539508 nova_compute[251877]: 2025-11-29 06:53:18.299 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:53:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:18.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:18 np0005539508 nova_compute[251877]: 2025-11-29 06:53:18.921 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 52.88 sec#033[00m
Nov 29 01:53:19 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:53:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:19.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:53:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:53:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:20.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:21 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:21.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:53:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:22.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:53:23 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:23.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:23 np0005539508 nova_compute[251877]: 2025-11-29 06:53:23.180 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:53:23 np0005539508 nova_compute[251877]: 2025-11-29 06:53:23.181 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:53:23 np0005539508 nova_compute[251877]: 2025-11-29 06:53:23.181 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:53:23 np0005539508 nova_compute[251877]: 2025-11-29 06:53:23.181 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 01:53:23 np0005539508 nova_compute[251877]: 2025-11-29 06:53:23.182 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 01:53:23 np0005539508 nova_compute[251877]: 2025-11-29 06:53:23.730 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 01:53:24 np0005539508 nova_compute[251877]: 2025-11-29 06:53:24.005 251881 WARNING nova.virt.libvirt.driver [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 01:53:24 np0005539508 nova_compute[251877]: 2025-11-29 06:53:24.007 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5181MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 01:53:24 np0005539508 nova_compute[251877]: 2025-11-29 06:53:24.007 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:53:24 np0005539508 nova_compute[251877]: 2025-11-29 06:53:24.007 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:53:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:53:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:53:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:53:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:53:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:53:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:53:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:24.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:53:25 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:53:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:25.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:53:26 np0005539508 podman[264674]: 2025-11-29 06:53:26.10866493 +0000 UTC m=+0.070525588 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 01:53:26 np0005539508 podman[264675]: 2025-11-29 06:53:26.183033995 +0000 UTC m=+0.140208562 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 01:53:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:26.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:27 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:27.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:28.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:29 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:29.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:29 np0005539508 nova_compute[251877]: 2025-11-29 06:53:29.176 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 0.25 sec#033[00m
Nov 29 01:53:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:53:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:53:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:53:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:53:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:53:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:53:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:53:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:53:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:53:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:53:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:53:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:53:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:30.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:53:31 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:53:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:31.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:53:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:53:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:32.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:53:33 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:33.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:53:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:34.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:53:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:53:35 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:53:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:35.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:53:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:36.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:37 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:37.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 01:53:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1865317670' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 01:53:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 01:53:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1865317670' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 01:53:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:53:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:38.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:53:39 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:39.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 01:53:39 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1260406766' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 01:53:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 01:53:39 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1260406766' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 01:53:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:53:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:40.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:41 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:41.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:53:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:42.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:53:43 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:43.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:44.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:53:45 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:45.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:45 np0005539508 podman[264793]: 2025-11-29 06:53:45.112977028 +0000 UTC m=+0.076749953 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 29 01:53:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:46.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:47 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:47.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:53:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:53:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:53:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:53:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:53:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:53:47 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 74870220-ebec-41ea-8c18-54ac50238915 does not exist
Nov 29 01:53:47 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev d7485278-69d0-4a24-a944-149c4607bb60 does not exist
Nov 29 01:53:47 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev eda73ca2-2de4-4ebc-9f3e-ba5e88264c0d does not exist
Nov 29 01:53:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:53:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:53:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:53:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:53:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:53:47 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:53:48 np0005539508 podman[265091]: 2025-11-29 06:53:48.66924455 +0000 UTC m=+0.038525821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:53:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:53:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:48.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:53:48 np0005539508 podman[265091]: 2025-11-29 06:53:48.967836402 +0000 UTC m=+0.337117633 container create 75dfc07acf72a9ee802be5cbede4f194effd009e7399c6eeb4f6a214649abaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_tharp, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 01:53:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:53:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:53:48 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:53:49 np0005539508 systemd[1]: Started libpod-conmon-75dfc07acf72a9ee802be5cbede4f194effd009e7399c6eeb4f6a214649abaf2.scope.
Nov 29 01:53:49 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:49 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:53:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:49.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:49 np0005539508 podman[265091]: 2025-11-29 06:53:49.11257511 +0000 UTC m=+0.481856411 container init 75dfc07acf72a9ee802be5cbede4f194effd009e7399c6eeb4f6a214649abaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:53:49 np0005539508 podman[265091]: 2025-11-29 06:53:49.124010211 +0000 UTC m=+0.493291412 container start 75dfc07acf72a9ee802be5cbede4f194effd009e7399c6eeb4f6a214649abaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_tharp, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 01:53:49 np0005539508 podman[265091]: 2025-11-29 06:53:49.127978272 +0000 UTC m=+0.497259473 container attach 75dfc07acf72a9ee802be5cbede4f194effd009e7399c6eeb4f6a214649abaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:53:49 np0005539508 cool_tharp[265108]: 167 167
Nov 29 01:53:49 np0005539508 systemd[1]: libpod-75dfc07acf72a9ee802be5cbede4f194effd009e7399c6eeb4f6a214649abaf2.scope: Deactivated successfully.
Nov 29 01:53:49 np0005539508 podman[265091]: 2025-11-29 06:53:49.133297951 +0000 UTC m=+0.502579152 container died 75dfc07acf72a9ee802be5cbede4f194effd009e7399c6eeb4f6a214649abaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:53:49 np0005539508 systemd[1]: var-lib-containers-storage-overlay-60d597a257115b476d4c5b0aa24a3c19f1697635a82ad308b2a56a14a458e8a9-merged.mount: Deactivated successfully.
Nov 29 01:53:49 np0005539508 podman[265091]: 2025-11-29 06:53:49.183631862 +0000 UTC m=+0.552913053 container remove 75dfc07acf72a9ee802be5cbede4f194effd009e7399c6eeb4f6a214649abaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 01:53:49 np0005539508 systemd[1]: libpod-conmon-75dfc07acf72a9ee802be5cbede4f194effd009e7399c6eeb4f6a214649abaf2.scope: Deactivated successfully.
Nov 29 01:53:49 np0005539508 podman[265133]: 2025-11-29 06:53:49.372619831 +0000 UTC m=+0.061096294 container create 7e91ce6c9471c5e228b92e290f3ae6ecec9d7432aef3c8215f52290333dd0a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kirch, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:53:49 np0005539508 podman[265133]: 2025-11-29 06:53:49.348748122 +0000 UTC m=+0.037224565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:53:49 np0005539508 systemd[1]: Started libpod-conmon-7e91ce6c9471c5e228b92e290f3ae6ecec9d7432aef3c8215f52290333dd0a18.scope.
Nov 29 01:53:49 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:53:49 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43e9946745ee197cfb538c5347afc34d53a512211fd8d5b781b31fc8c87e109/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:53:49 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43e9946745ee197cfb538c5347afc34d53a512211fd8d5b781b31fc8c87e109/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:53:49 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43e9946745ee197cfb538c5347afc34d53a512211fd8d5b781b31fc8c87e109/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:53:49 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43e9946745ee197cfb538c5347afc34d53a512211fd8d5b781b31fc8c87e109/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:53:49 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43e9946745ee197cfb538c5347afc34d53a512211fd8d5b781b31fc8c87e109/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:53:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:53:49 np0005539508 podman[265133]: 2025-11-29 06:53:49.855811279 +0000 UTC m=+0.544287802 container init 7e91ce6c9471c5e228b92e290f3ae6ecec9d7432aef3c8215f52290333dd0a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kirch, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 01:53:49 np0005539508 podman[265133]: 2025-11-29 06:53:49.869291507 +0000 UTC m=+0.557767970 container start 7e91ce6c9471c5e228b92e290f3ae6ecec9d7432aef3c8215f52290333dd0a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kirch, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 01:53:50 np0005539508 podman[265133]: 2025-11-29 06:53:50.098223126 +0000 UTC m=+0.786699649 container attach 7e91ce6c9471c5e228b92e290f3ae6ecec9d7432aef3c8215f52290333dd0a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:53:50 np0005539508 xenodochial_kirch[265150]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:53:50 np0005539508 xenodochial_kirch[265150]: --> relative data size: 1.0
Nov 29 01:53:50 np0005539508 xenodochial_kirch[265150]: --> All data devices are unavailable
Nov 29 01:53:50 np0005539508 systemd[1]: libpod-7e91ce6c9471c5e228b92e290f3ae6ecec9d7432aef3c8215f52290333dd0a18.scope: Deactivated successfully.
Nov 29 01:53:50 np0005539508 podman[265133]: 2025-11-29 06:53:50.679727191 +0000 UTC m=+1.368203624 container died 7e91ce6c9471c5e228b92e290f3ae6ecec9d7432aef3c8215f52290333dd0a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kirch, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:53:50 np0005539508 systemd[1]: var-lib-containers-storage-overlay-a43e9946745ee197cfb538c5347afc34d53a512211fd8d5b781b31fc8c87e109-merged.mount: Deactivated successfully.
Nov 29 01:53:50 np0005539508 podman[265133]: 2025-11-29 06:53:50.741184654 +0000 UTC m=+1.429661087 container remove 7e91ce6c9471c5e228b92e290f3ae6ecec9d7432aef3c8215f52290333dd0a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 01:53:50 np0005539508 systemd[1]: libpod-conmon-7e91ce6c9471c5e228b92e290f3ae6ecec9d7432aef3c8215f52290333dd0a18.scope: Deactivated successfully.
Nov 29 01:53:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:50.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:51 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:53:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:51.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:53:51 np0005539508 podman[265321]: 2025-11-29 06:53:51.485568536 +0000 UTC m=+0.044917730 container create d4b8429e060e334da16efc53bc2bc64e72cbec3a2db2cadcc89b47c708d13d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 01:53:51 np0005539508 systemd[1]: Started libpod-conmon-d4b8429e060e334da16efc53bc2bc64e72cbec3a2db2cadcc89b47c708d13d9a.scope.
Nov 29 01:53:51 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:53:51 np0005539508 podman[265321]: 2025-11-29 06:53:51.465327079 +0000 UTC m=+0.024676323 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:53:51 np0005539508 podman[265321]: 2025-11-29 06:53:51.560841396 +0000 UTC m=+0.120190580 container init d4b8429e060e334da16efc53bc2bc64e72cbec3a2db2cadcc89b47c708d13d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:53:51 np0005539508 podman[265321]: 2025-11-29 06:53:51.568252964 +0000 UTC m=+0.127602198 container start d4b8429e060e334da16efc53bc2bc64e72cbec3a2db2cadcc89b47c708d13d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bouman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 01:53:51 np0005539508 podman[265321]: 2025-11-29 06:53:51.572137623 +0000 UTC m=+0.131486837 container attach d4b8429e060e334da16efc53bc2bc64e72cbec3a2db2cadcc89b47c708d13d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bouman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:53:51 np0005539508 inspiring_bouman[265337]: 167 167
Nov 29 01:53:51 np0005539508 systemd[1]: libpod-d4b8429e060e334da16efc53bc2bc64e72cbec3a2db2cadcc89b47c708d13d9a.scope: Deactivated successfully.
Nov 29 01:53:51 np0005539508 podman[265321]: 2025-11-29 06:53:51.578698577 +0000 UTC m=+0.138047771 container died d4b8429e060e334da16efc53bc2bc64e72cbec3a2db2cadcc89b47c708d13d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 01:53:51 np0005539508 systemd[1]: var-lib-containers-storage-overlay-090e22c2823eb1a88ee9011518b105a6782830cca9479d3c82e085aec3ef5cff-merged.mount: Deactivated successfully.
Nov 29 01:53:51 np0005539508 podman[265321]: 2025-11-29 06:53:51.620312263 +0000 UTC m=+0.179661457 container remove d4b8429e060e334da16efc53bc2bc64e72cbec3a2db2cadcc89b47c708d13d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bouman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:53:51 np0005539508 systemd[1]: libpod-conmon-d4b8429e060e334da16efc53bc2bc64e72cbec3a2db2cadcc89b47c708d13d9a.scope: Deactivated successfully.
Nov 29 01:53:51 np0005539508 podman[265360]: 2025-11-29 06:53:51.786655218 +0000 UTC m=+0.037194654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:53:52 np0005539508 nova_compute[251877]: 2025-11-29 06:53:52.103 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 01:53:52 np0005539508 nova_compute[251877]: 2025-11-29 06:53:52.106 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 01:53:52 np0005539508 nova_compute[251877]: 2025-11-29 06:53:52.146 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 01:53:52 np0005539508 podman[265360]: 2025-11-29 06:53:52.285165295 +0000 UTC m=+0.535704771 container create 0c165333b515062c781cfcf388f5626df1eafe337d0cf687acb1bab546b944b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 01:53:52 np0005539508 systemd[1]: Started libpod-conmon-0c165333b515062c781cfcf388f5626df1eafe337d0cf687acb1bab546b944b8.scope.
Nov 29 01:53:52 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:53:52 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6508d7e90fee19c06d6f68b9d59b563340f51147c7defbf014e7c414aee8327c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:53:52 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6508d7e90fee19c06d6f68b9d59b563340f51147c7defbf014e7c414aee8327c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:53:52 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6508d7e90fee19c06d6f68b9d59b563340f51147c7defbf014e7c414aee8327c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:53:52 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6508d7e90fee19c06d6f68b9d59b563340f51147c7defbf014e7c414aee8327c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:53:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:52.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:52 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 01:53:52 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/185560197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 01:53:52 np0005539508 nova_compute[251877]: 2025-11-29 06:53:52.874 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.728s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 01:53:52 np0005539508 nova_compute[251877]: 2025-11-29 06:53:52.881 251881 DEBUG nova.compute.provider_tree [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed in ProviderTree for provider: 36ed0248-8d04-4532-95bb-daab89f12202 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 01:53:52 np0005539508 podman[265360]: 2025-11-29 06:53:52.901388123 +0000 UTC m=+1.151927599 container init 0c165333b515062c781cfcf388f5626df1eafe337d0cf687acb1bab546b944b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:53:52 np0005539508 podman[265360]: 2025-11-29 06:53:52.914295545 +0000 UTC m=+1.164834941 container start 0c165333b515062c781cfcf388f5626df1eafe337d0cf687acb1bab546b944b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 01:53:53 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:53.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:53 np0005539508 podman[265360]: 2025-11-29 06:53:53.279833994 +0000 UTC m=+1.530373480 container attach 0c165333b515062c781cfcf388f5626df1eafe337d0cf687acb1bab546b944b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]: {
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:    "1": [
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:        {
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:            "devices": [
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:                "/dev/loop3"
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:            ],
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:            "lv_name": "ceph_lv0",
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:            "lv_size": "7511998464",
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:            "name": "ceph_lv0",
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:            "tags": {
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:                "ceph.cluster_name": "ceph",
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:                "ceph.crush_device_class": "",
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:                "ceph.encrypted": "0",
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:                "ceph.osd_id": "1",
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:                "ceph.type": "block",
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:                "ceph.vdo": "0"
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:            },
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:            "type": "block",
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:            "vg_name": "ceph_vg0"
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:        }
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]:    ]
Nov 29 01:53:53 np0005539508 thirsty_chebyshev[265396]: }
Nov 29 01:53:53 np0005539508 systemd[1]: libpod-0c165333b515062c781cfcf388f5626df1eafe337d0cf687acb1bab546b944b8.scope: Deactivated successfully.
Nov 29 01:53:53 np0005539508 podman[265360]: 2025-11-29 06:53:53.690579901 +0000 UTC m=+1.941119337 container died 0c165333b515062c781cfcf388f5626df1eafe337d0cf687acb1bab546b944b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:53:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:53:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:53:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:53:54
Nov 29 01:53:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:53:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:53:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['images', '.rgw.root', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'backups', '.mgr', 'default.rgw.control', 'default.rgw.meta']
Nov 29 01:53:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:53:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:53:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:53:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:53:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:53:54 np0005539508 systemd[1]: var-lib-containers-storage-overlay-6508d7e90fee19c06d6f68b9d59b563340f51147c7defbf014e7c414aee8327c-merged.mount: Deactivated successfully.
Nov 29 01:53:54 np0005539508 podman[265360]: 2025-11-29 06:53:54.397628185 +0000 UTC m=+2.648167581 container remove 0c165333b515062c781cfcf388f5626df1eafe337d0cf687acb1bab546b944b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:53:54 np0005539508 systemd[1]: libpod-conmon-0c165333b515062c781cfcf388f5626df1eafe337d0cf687acb1bab546b944b8.scope: Deactivated successfully.
Nov 29 01:53:54 np0005539508 nova_compute[251877]: 2025-11-29 06:53:54.474 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed for provider 36ed0248-8d04-4532-95bb-daab89f12202 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 01:53:54 np0005539508 nova_compute[251877]: 2025-11-29 06:53:54.477 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 01:53:54 np0005539508 nova_compute[251877]: 2025-11-29 06:53:54.478 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 30.471s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:53:54 np0005539508 nova_compute[251877]: 2025-11-29 06:53:54.479 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:53:54 np0005539508 nova_compute[251877]: 2025-11-29 06:53:54.480 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 01:53:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:53:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:54.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:53:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:53:55 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:55.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:55 np0005539508 podman[265559]: 2025-11-29 06:53:55.218619334 +0000 UTC m=+0.035691922 container create cff4cf1240ff526cac91a3cd62051dae93c09cecb321888c9d29a47db6b82e84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_solomon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:53:55 np0005539508 systemd[1]: Started libpod-conmon-cff4cf1240ff526cac91a3cd62051dae93c09cecb321888c9d29a47db6b82e84.scope.
Nov 29 01:53:55 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:53:55 np0005539508 podman[265559]: 2025-11-29 06:53:55.279035678 +0000 UTC m=+0.096108286 container init cff4cf1240ff526cac91a3cd62051dae93c09cecb321888c9d29a47db6b82e84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_solomon, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:53:55 np0005539508 podman[265559]: 2025-11-29 06:53:55.285251352 +0000 UTC m=+0.102323930 container start cff4cf1240ff526cac91a3cd62051dae93c09cecb321888c9d29a47db6b82e84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 01:53:55 np0005539508 podman[265559]: 2025-11-29 06:53:55.288508393 +0000 UTC m=+0.105581001 container attach cff4cf1240ff526cac91a3cd62051dae93c09cecb321888c9d29a47db6b82e84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_solomon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:53:55 np0005539508 objective_solomon[265575]: 167 167
Nov 29 01:53:55 np0005539508 systemd[1]: libpod-cff4cf1240ff526cac91a3cd62051dae93c09cecb321888c9d29a47db6b82e84.scope: Deactivated successfully.
Nov 29 01:53:55 np0005539508 conmon[265575]: conmon cff4cf1240ff526cac91 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cff4cf1240ff526cac91a3cd62051dae93c09cecb321888c9d29a47db6b82e84.scope/container/memory.events
Nov 29 01:53:55 np0005539508 podman[265559]: 2025-11-29 06:53:55.292419003 +0000 UTC m=+0.109491601 container died cff4cf1240ff526cac91a3cd62051dae93c09cecb321888c9d29a47db6b82e84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_solomon, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:53:55 np0005539508 podman[265559]: 2025-11-29 06:53:55.203822429 +0000 UTC m=+0.020895027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:53:55 np0005539508 systemd[1]: var-lib-containers-storage-overlay-bcaed3e20aca9184cd78bbf2a63e608ae49d32b4762b104d4e151d527404ae5f-merged.mount: Deactivated successfully.
Nov 29 01:53:55 np0005539508 podman[265559]: 2025-11-29 06:53:55.539580223 +0000 UTC m=+0.356652801 container remove cff4cf1240ff526cac91a3cd62051dae93c09cecb321888c9d29a47db6b82e84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_solomon, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 01:53:55 np0005539508 systemd[1]: libpod-conmon-cff4cf1240ff526cac91a3cd62051dae93c09cecb321888c9d29a47db6b82e84.scope: Deactivated successfully.
Nov 29 01:53:55 np0005539508 podman[265601]: 2025-11-29 06:53:55.711919065 +0000 UTC m=+0.027508882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:53:56 np0005539508 podman[265601]: 2025-11-29 06:53:56.109828282 +0000 UTC m=+0.425418119 container create 72c80f2c6232a575cbd45a124e4e0d675ae2797f0afc41b5d4a6cd119272d7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:53:56 np0005539508 systemd[1]: Started libpod-conmon-72c80f2c6232a575cbd45a124e4e0d675ae2797f0afc41b5d4a6cd119272d7bf.scope.
Nov 29 01:53:56 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:53:56 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce90c13de5dca8120d789a5ec83300d5f4823fbb2e28728959be5ee6f86369b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:53:56 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce90c13de5dca8120d789a5ec83300d5f4823fbb2e28728959be5ee6f86369b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:53:56 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce90c13de5dca8120d789a5ec83300d5f4823fbb2e28728959be5ee6f86369b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:53:56 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce90c13de5dca8120d789a5ec83300d5f4823fbb2e28728959be5ee6f86369b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:53:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:53:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:56.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:53:56 np0005539508 podman[265601]: 2025-11-29 06:53:56.921464819 +0000 UTC m=+1.237054646 container init 72c80f2c6232a575cbd45a124e4e0d675ae2797f0afc41b5d4a6cd119272d7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:53:56 np0005539508 podman[265601]: 2025-11-29 06:53:56.935631466 +0000 UTC m=+1.251221293 container start 72c80f2c6232a575cbd45a124e4e0d675ae2797f0afc41b5d4a6cd119272d7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 01:53:57 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:57.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:57 np0005539508 podman[265601]: 2025-11-29 06:53:57.186915042 +0000 UTC m=+1.502504909 container attach 72c80f2c6232a575cbd45a124e4e0d675ae2797f0afc41b5d4a6cd119272d7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lovelace, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 01:53:57 np0005539508 podman[265620]: 2025-11-29 06:53:57.245239617 +0000 UTC m=+0.732896760 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 01:53:57 np0005539508 podman[265621]: 2025-11-29 06:53:57.286825463 +0000 UTC m=+0.775242228 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 01:53:57 np0005539508 frosty_lovelace[265618]: {
Nov 29 01:53:57 np0005539508 frosty_lovelace[265618]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:53:57 np0005539508 frosty_lovelace[265618]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:53:57 np0005539508 frosty_lovelace[265618]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:53:57 np0005539508 frosty_lovelace[265618]:        "osd_id": 1,
Nov 29 01:53:57 np0005539508 frosty_lovelace[265618]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:53:57 np0005539508 frosty_lovelace[265618]:        "type": "bluestore"
Nov 29 01:53:57 np0005539508 frosty_lovelace[265618]:    }
Nov 29 01:53:57 np0005539508 frosty_lovelace[265618]: }
Nov 29 01:53:57 np0005539508 systemd[1]: libpod-72c80f2c6232a575cbd45a124e4e0d675ae2797f0afc41b5d4a6cd119272d7bf.scope: Deactivated successfully.
Nov 29 01:53:57 np0005539508 podman[265601]: 2025-11-29 06:53:57.865088747 +0000 UTC m=+2.180678604 container died 72c80f2c6232a575cbd45a124e4e0d675ae2797f0afc41b5d4a6cd119272d7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 01:53:58 np0005539508 systemd[1]: var-lib-containers-storage-overlay-6ce90c13de5dca8120d789a5ec83300d5f4823fbb2e28728959be5ee6f86369b-merged.mount: Deactivated successfully.
Nov 29 01:53:58 np0005539508 podman[265601]: 2025-11-29 06:53:58.639041309 +0000 UTC m=+2.954631136 container remove 72c80f2c6232a575cbd45a124e4e0d675ae2797f0afc41b5d4a6cd119272d7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 01:53:58 np0005539508 systemd[1]: libpod-conmon-72c80f2c6232a575cbd45a124e4e0d675ae2797f0afc41b5d4a6cd119272d7bf.scope: Deactivated successfully.
Nov 29 01:53:58 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:53:58 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:53:58 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:53:58 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:53:58 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 9134e5b1-3cc7-478b-a7f4-4568d6f2d22d does not exist
Nov 29 01:53:58 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev edec7791-c309-493b-8e3d-b167a88696fe does not exist
Nov 29 01:53:58 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 415624cf-b660-4e64-a9c9-b87976fe1667 does not exist
Nov 29 01:53:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:58.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:59 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:53:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:53:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:53:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:59.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:53:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:53:59 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:53:59 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:54:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:00.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:01 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:01.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:02 np0005539508 nova_compute[251877]: 2025-11-29 06:54:02.643 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 13.47 sec#033[00m
Nov 29 01:54:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:02.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:03 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:03 np0005539508 nova_compute[251877]: 2025-11-29 06:54:03.109 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 01:54:03 np0005539508 nova_compute[251877]: 2025-11-29 06:54:03.109 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:54:03 np0005539508 nova_compute[251877]: 2025-11-29 06:54:03.109 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 01:54:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:03.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:04 np0005539508 nova_compute[251877]: 2025-11-29 06:54:04.481 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:54:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:04.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:54:05 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:05.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:06.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:07 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:07.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:08.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:09 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:09.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:54:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:10.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:11 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:11.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:12.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:13 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:13.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:54:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:54:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:54:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:54:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:54:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:54:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:54:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:54:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:54:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:54:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:54:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:54:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:54:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:54:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:54:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:54:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:54:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:54:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:54:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:54:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:54:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:54:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:54:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:54:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:14.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:15 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:15.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:16 np0005539508 podman[265809]: 2025-11-29 06:54:16.123266718 +0000 UTC m=+0.080054805 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 01:54:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:16.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:17 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:17.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:54:17.246 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:54:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:54:17.247 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:54:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:54:17.248 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:54:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:54:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:18.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:54:19 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:19.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:54:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:20.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:21 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:21.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:22.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:23 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:23.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:54:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:54:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:54:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:54:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:54:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:54:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:54:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:24.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:25 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:25.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:25 np0005539508 nova_compute[251877]: 2025-11-29 06:54:25.280 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 2.63 sec#033[00m
Nov 29 01:54:26 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:26 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:26 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:26.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:27 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:27.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:28 np0005539508 podman[265894]: 2025-11-29 06:54:28.130506351 +0000 UTC m=+0.089851981 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Nov 29 01:54:28 np0005539508 podman[265893]: 2025-11-29 06:54:28.146649413 +0000 UTC m=+0.099496301 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 01:54:28 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:28 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:28 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:28.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:29 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:29.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:54:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:54:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:54:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:54:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:54:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:54:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:54:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:54:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:54:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:54:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:54:30 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:30 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:30 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:30.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:31 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:31.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:32 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:32 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:32 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:32.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:33 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:33.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:54:34 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:34 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:34 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:34.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:35 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:35.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:36 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:36 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:36 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:36.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:37 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:54:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:37.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:54:38 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:38 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:38 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:38.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:39 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:39.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:54:40 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:40 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:40 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:40.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:40 np0005539508 nova_compute[251877]: 2025-11-29 06:54:40.957 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:54:40 np0005539508 nova_compute[251877]: 2025-11-29 06:54:40.958 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:54:41 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:41.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:42 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:42 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:42 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:42.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:43 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:43.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:54:44 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:44 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:44 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:44.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:45 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:45.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:46 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:46 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:46 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:46.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:47 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:47 np0005539508 podman[265998]: 2025-11-29 06:54:47.138452673 +0000 UTC m=+0.097160495 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 01:54:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:47.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:48 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:48 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:48 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:48.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:49 np0005539508 nova_compute[251877]: 2025-11-29 06:54:49.073 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 3.80 sec#033[00m
Nov 29 01:54:49 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:49.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:49 np0005539508 nova_compute[251877]: 2025-11-29 06:54:49.364 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:54:49 np0005539508 nova_compute[251877]: 2025-11-29 06:54:49.364 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 01:54:49 np0005539508 nova_compute[251877]: 2025-11-29 06:54:49.365 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 01:54:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:54:50 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:50 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:50 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:50.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:51 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:51.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:52 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:52 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:52 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:52.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:53 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:53.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:54:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:54:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:54:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:54:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:54:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:54:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:54:54
Nov 29 01:54:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:54:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:54:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'default.rgw.control', '.mgr', 'backups', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', '.rgw.root']
Nov 29 01:54:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:54:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:54:54 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:54 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:54:54 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:54.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:54:55 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:55.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:56 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:56 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:56 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:56.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:57 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:57.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:58 np0005539508 podman[266054]: 2025-11-29 06:54:58.376215861 +0000 UTC m=+0.077030921 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 29 01:54:58 np0005539508 podman[266055]: 2025-11-29 06:54:58.400605875 +0000 UTC m=+0.097751632 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 01:54:58 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:58 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:58 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:58.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:59 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:54:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:54:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:54:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:59.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:54:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:55:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:55:00 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:55:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:55:00 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:55:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:55:00 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:55:00 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 5c087e73-0188-421e-b770-445446019298 does not exist
Nov 29 01:55:00 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 31f5b2f4-53e1-4661-9fb8-b6483ded1400 does not exist
Nov 29 01:55:00 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 3b417d2d-f933-4e90-b768-2c81bc7e332e does not exist
Nov 29 01:55:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:55:00 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:55:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:55:00 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:55:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:55:00 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:55:00 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:00 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:00 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:00.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:01 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:01.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:01 np0005539508 podman[266405]: 2025-11-29 06:55:01.436191406 +0000 UTC m=+0.043110809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:55:01 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:55:01 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:55:01 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:55:02 np0005539508 podman[266405]: 2025-11-29 06:55:02.070457461 +0000 UTC m=+0.677376814 container create 19667375615bd59a41ed3ff11672c19407da79aeb4999e5029c46782300b8b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heyrovsky, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:55:02 np0005539508 systemd[1]: Started libpod-conmon-19667375615bd59a41ed3ff11672c19407da79aeb4999e5029c46782300b8b92.scope.
Nov 29 01:55:02 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:55:02 np0005539508 podman[266405]: 2025-11-29 06:55:02.652461349 +0000 UTC m=+1.259380742 container init 19667375615bd59a41ed3ff11672c19407da79aeb4999e5029c46782300b8b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 01:55:02 np0005539508 podman[266405]: 2025-11-29 06:55:02.664496736 +0000 UTC m=+1.271416089 container start 19667375615bd59a41ed3ff11672c19407da79aeb4999e5029c46782300b8b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:55:02 np0005539508 dazzling_heyrovsky[266422]: 167 167
Nov 29 01:55:02 np0005539508 systemd[1]: libpod-19667375615bd59a41ed3ff11672c19407da79aeb4999e5029c46782300b8b92.scope: Deactivated successfully.
Nov 29 01:55:02 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:02 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:02 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:02.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:03 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:03.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:03 np0005539508 podman[266405]: 2025-11-29 06:55:03.317212046 +0000 UTC m=+1.924131449 container attach 19667375615bd59a41ed3ff11672c19407da79aeb4999e5029c46782300b8b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heyrovsky, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 01:55:03 np0005539508 podman[266405]: 2025-11-29 06:55:03.31876976 +0000 UTC m=+1.925689123 container died 19667375615bd59a41ed3ff11672c19407da79aeb4999e5029c46782300b8b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heyrovsky, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 01:55:04 np0005539508 systemd[1]: var-lib-containers-storage-overlay-9cc6ad4db05974213ff2f4c91fa34c81b3338dc7c747344d2bacf058b4492625-merged.mount: Deactivated successfully.
Nov 29 01:55:04 np0005539508 podman[266405]: 2025-11-29 06:55:04.34844431 +0000 UTC m=+2.955363653 container remove 19667375615bd59a41ed3ff11672c19407da79aeb4999e5029c46782300b8b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heyrovsky, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 01:55:04 np0005539508 systemd[1]: libpod-conmon-19667375615bd59a41ed3ff11672c19407da79aeb4999e5029c46782300b8b92.scope: Deactivated successfully.
Nov 29 01:55:04 np0005539508 podman[266448]: 2025-11-29 06:55:04.494976919 +0000 UTC m=+0.023711896 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:55:04 np0005539508 podman[266448]: 2025-11-29 06:55:04.668627138 +0000 UTC m=+0.197362075 container create c377d9274868bf0036e726523b0caae924b73e5f5c12fab35e1bbf2c8da9f6bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:55:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:55:04 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:04 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:04 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:04.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:05 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:05.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:05 np0005539508 systemd[1]: Started libpod-conmon-c377d9274868bf0036e726523b0caae924b73e5f5c12fab35e1bbf2c8da9f6bd.scope.
Nov 29 01:55:05 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:55:05 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e7b691f5284613b52a9ba53b4c4a6763224119c250b4f27bd020b14f1567d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:55:05 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e7b691f5284613b52a9ba53b4c4a6763224119c250b4f27bd020b14f1567d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:55:05 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e7b691f5284613b52a9ba53b4c4a6763224119c250b4f27bd020b14f1567d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:55:05 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e7b691f5284613b52a9ba53b4c4a6763224119c250b4f27bd020b14f1567d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:55:05 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e7b691f5284613b52a9ba53b4c4a6763224119c250b4f27bd020b14f1567d1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:55:05 np0005539508 podman[266448]: 2025-11-29 06:55:05.731699235 +0000 UTC m=+1.260434222 container init c377d9274868bf0036e726523b0caae924b73e5f5c12fab35e1bbf2c8da9f6bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 01:55:05 np0005539508 podman[266448]: 2025-11-29 06:55:05.742663062 +0000 UTC m=+1.271397989 container start c377d9274868bf0036e726523b0caae924b73e5f5c12fab35e1bbf2c8da9f6bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:55:05 np0005539508 podman[266448]: 2025-11-29 06:55:05.768849366 +0000 UTC m=+1.297584303 container attach c377d9274868bf0036e726523b0caae924b73e5f5c12fab35e1bbf2c8da9f6bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:55:06 np0005539508 infallible_goldwasser[266466]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:55:06 np0005539508 infallible_goldwasser[266466]: --> relative data size: 1.0
Nov 29 01:55:06 np0005539508 infallible_goldwasser[266466]: --> All data devices are unavailable
Nov 29 01:55:06 np0005539508 systemd[1]: libpod-c377d9274868bf0036e726523b0caae924b73e5f5c12fab35e1bbf2c8da9f6bd.scope: Deactivated successfully.
Nov 29 01:55:06 np0005539508 podman[266481]: 2025-11-29 06:55:06.716524787 +0000 UTC m=+0.050078074 container died c377d9274868bf0036e726523b0caae924b73e5f5c12fab35e1bbf2c8da9f6bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:55:06 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:06 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:06 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:06.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:07 np0005539508 systemd[1]: var-lib-containers-storage-overlay-21e7b691f5284613b52a9ba53b4c4a6763224119c250b4f27bd020b14f1567d1-merged.mount: Deactivated successfully.
Nov 29 01:55:07 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:07.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:07 np0005539508 podman[266481]: 2025-11-29 06:55:07.88072801 +0000 UTC m=+1.214281287 container remove c377d9274868bf0036e726523b0caae924b73e5f5c12fab35e1bbf2c8da9f6bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:55:07 np0005539508 systemd[1]: libpod-conmon-c377d9274868bf0036e726523b0caae924b73e5f5c12fab35e1bbf2c8da9f6bd.scope: Deactivated successfully.
Nov 29 01:55:08 np0005539508 podman[266639]: 2025-11-29 06:55:08.744238661 +0000 UTC m=+0.037307737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:55:08 np0005539508 podman[266639]: 2025-11-29 06:55:08.948434607 +0000 UTC m=+0.241503653 container create 50d263b9f35e47c6aead7f9b948f69b3565e3589aabef63ee0eaef9c005793d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dirac, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:55:08 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:08 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:08 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:08.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:09 np0005539508 systemd[1]: Started libpod-conmon-50d263b9f35e47c6aead7f9b948f69b3565e3589aabef63ee0eaef9c005793d0.scope.
Nov 29 01:55:09 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:55:09 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:09 np0005539508 podman[266639]: 2025-11-29 06:55:09.166048928 +0000 UTC m=+0.459118064 container init 50d263b9f35e47c6aead7f9b948f69b3565e3589aabef63ee0eaef9c005793d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dirac, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 01:55:09 np0005539508 podman[266639]: 2025-11-29 06:55:09.177327595 +0000 UTC m=+0.470396641 container start 50d263b9f35e47c6aead7f9b948f69b3565e3589aabef63ee0eaef9c005793d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dirac, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:55:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:09.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:09 np0005539508 jovial_dirac[266655]: 167 167
Nov 29 01:55:09 np0005539508 systemd[1]: libpod-50d263b9f35e47c6aead7f9b948f69b3565e3589aabef63ee0eaef9c005793d0.scope: Deactivated successfully.
Nov 29 01:55:09 np0005539508 podman[266639]: 2025-11-29 06:55:09.5056214 +0000 UTC m=+0.798690476 container attach 50d263b9f35e47c6aead7f9b948f69b3565e3589aabef63ee0eaef9c005793d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dirac, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:55:09 np0005539508 podman[266639]: 2025-11-29 06:55:09.506395772 +0000 UTC m=+0.799464878 container died 50d263b9f35e47c6aead7f9b948f69b3565e3589aabef63ee0eaef9c005793d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:55:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:55:10 np0005539508 nova_compute[251877]: 2025-11-29 06:55:10.800 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 11.73 sec#033[00m
Nov 29 01:55:10 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:10 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:10 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:10.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:11 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:11.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:11 np0005539508 systemd[1]: var-lib-containers-storage-overlay-628117b89897d8dcb258a273753dfc758c5d3fc2fca1b65dd9bc72bdf8fd1b59-merged.mount: Deactivated successfully.
Nov 29 01:55:12 np0005539508 podman[266639]: 2025-11-29 06:55:12.848610452 +0000 UTC m=+4.141679528 container remove 50d263b9f35e47c6aead7f9b948f69b3565e3589aabef63ee0eaef9c005793d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 01:55:12 np0005539508 systemd[1]: libpod-conmon-50d263b9f35e47c6aead7f9b948f69b3565e3589aabef63ee0eaef9c005793d0.scope: Deactivated successfully.
Nov 29 01:55:12 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:12 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:12 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:12.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:13 np0005539508 podman[266681]: 2025-11-29 06:55:13.0208433 +0000 UTC m=+0.026736260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:55:13 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:13.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:55:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:55:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:55:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:55:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:55:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:55:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:55:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:55:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:55:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:55:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:55:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:55:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:55:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:55:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:55:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:55:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:55:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:55:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:55:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:55:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:55:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:55:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:55:13 np0005539508 nova_compute[251877]: 2025-11-29 06:55:13.286 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 01:55:13 np0005539508 nova_compute[251877]: 2025-11-29 06:55:13.286 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:55:13 np0005539508 nova_compute[251877]: 2025-11-29 06:55:13.287 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:55:13 np0005539508 nova_compute[251877]: 2025-11-29 06:55:13.287 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:55:13 np0005539508 nova_compute[251877]: 2025-11-29 06:55:13.287 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:55:13 np0005539508 nova_compute[251877]: 2025-11-29 06:55:13.287 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:55:13 np0005539508 nova_compute[251877]: 2025-11-29 06:55:13.287 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:55:13 np0005539508 podman[266681]: 2025-11-29 06:55:13.331165042 +0000 UTC m=+0.337057922 container create afada81ee37eae556172e0a1f9a257136700ee26e56b530f77cd5a2a2b9fb915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 01:55:13 np0005539508 systemd[1]: Started libpod-conmon-afada81ee37eae556172e0a1f9a257136700ee26e56b530f77cd5a2a2b9fb915.scope.
Nov 29 01:55:13 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:55:13 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82dbba54450ae4145f2fef696b0fee767ca8dd74e9b0ffa6e7ab4cac9d0925c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:55:13 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82dbba54450ae4145f2fef696b0fee767ca8dd74e9b0ffa6e7ab4cac9d0925c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:55:13 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82dbba54450ae4145f2fef696b0fee767ca8dd74e9b0ffa6e7ab4cac9d0925c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:55:13 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82dbba54450ae4145f2fef696b0fee767ca8dd74e9b0ffa6e7ab4cac9d0925c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:55:13 np0005539508 podman[266681]: 2025-11-29 06:55:13.534531433 +0000 UTC m=+0.540424353 container init afada81ee37eae556172e0a1f9a257136700ee26e56b530f77cd5a2a2b9fb915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_curran, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:55:13 np0005539508 podman[266681]: 2025-11-29 06:55:13.547448995 +0000 UTC m=+0.553341865 container start afada81ee37eae556172e0a1f9a257136700ee26e56b530f77cd5a2a2b9fb915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_curran, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 01:55:13 np0005539508 podman[266681]: 2025-11-29 06:55:13.634757483 +0000 UTC m=+0.640650393 container attach afada81ee37eae556172e0a1f9a257136700ee26e56b530f77cd5a2a2b9fb915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 01:55:14 np0005539508 nova_compute[251877]: 2025-11-29 06:55:14.076 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:55:14 np0005539508 nova_compute[251877]: 2025-11-29 06:55:14.077 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 01:55:14 np0005539508 nova_compute[251877]: 2025-11-29 06:55:14.079 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]: {
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:    "1": [
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:        {
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:            "devices": [
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:                "/dev/loop3"
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:            ],
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:            "lv_name": "ceph_lv0",
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:            "lv_size": "7511998464",
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:            "name": "ceph_lv0",
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:            "tags": {
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:                "ceph.cluster_name": "ceph",
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:                "ceph.crush_device_class": "",
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:                "ceph.encrypted": "0",
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:                "ceph.osd_id": "1",
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:                "ceph.type": "block",
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:                "ceph.vdo": "0"
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:            },
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:            "type": "block",
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:            "vg_name": "ceph_vg0"
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:        }
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]:    ]
Nov 29 01:55:14 np0005539508 relaxed_curran[266699]: }
Nov 29 01:55:14 np0005539508 systemd[1]: libpod-afada81ee37eae556172e0a1f9a257136700ee26e56b530f77cd5a2a2b9fb915.scope: Deactivated successfully.
Nov 29 01:55:14 np0005539508 podman[266681]: 2025-11-29 06:55:14.361982283 +0000 UTC m=+1.367875253 container died afada81ee37eae556172e0a1f9a257136700ee26e56b530f77cd5a2a2b9fb915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 01:55:14 np0005539508 systemd[1]: var-lib-containers-storage-overlay-a82dbba54450ae4145f2fef696b0fee767ca8dd74e9b0ffa6e7ab4cac9d0925c-merged.mount: Deactivated successfully.
Nov 29 01:55:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:55:14 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:14 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:14 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:14.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:15 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:15.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:15 np0005539508 nova_compute[251877]: 2025-11-29 06:55:15.467 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:55:15 np0005539508 nova_compute[251877]: 2025-11-29 06:55:15.468 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:55:15 np0005539508 nova_compute[251877]: 2025-11-29 06:55:15.468 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:55:15 np0005539508 nova_compute[251877]: 2025-11-29 06:55:15.469 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 01:55:15 np0005539508 nova_compute[251877]: 2025-11-29 06:55:15.469 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 01:55:15 np0005539508 podman[266681]: 2025-11-29 06:55:15.94684235 +0000 UTC m=+2.952735220 container remove afada81ee37eae556172e0a1f9a257136700ee26e56b530f77cd5a2a2b9fb915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_curran, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 01:55:15 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 01:55:15 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2736452741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 01:55:15 np0005539508 nova_compute[251877]: 2025-11-29 06:55:15.984 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 01:55:16 np0005539508 systemd[1]: libpod-conmon-afada81ee37eae556172e0a1f9a257136700ee26e56b530f77cd5a2a2b9fb915.scope: Deactivated successfully.
Nov 29 01:55:16 np0005539508 nova_compute[251877]: 2025-11-29 06:55:16.192 251881 WARNING nova.virt.libvirt.driver [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 01:55:16 np0005539508 nova_compute[251877]: 2025-11-29 06:55:16.194 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5149MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 01:55:16 np0005539508 nova_compute[251877]: 2025-11-29 06:55:16.194 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:55:16 np0005539508 nova_compute[251877]: 2025-11-29 06:55:16.195 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:55:16 np0005539508 nova_compute[251877]: 2025-11-29 06:55:16.754 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 01:55:16 np0005539508 nova_compute[251877]: 2025-11-29 06:55:16.756 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 01:55:16 np0005539508 nova_compute[251877]: 2025-11-29 06:55:16.784 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 01:55:16 np0005539508 podman[266886]: 2025-11-29 06:55:16.697334623 +0000 UTC m=+0.025094495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:55:16 np0005539508 podman[266886]: 2025-11-29 06:55:16.854057837 +0000 UTC m=+0.181817669 container create bff865cfce159636f07c022d1dbddad749d61d9662d94b3170733b768715a9cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 01:55:16 np0005539508 systemd[1]: Started libpod-conmon-bff865cfce159636f07c022d1dbddad749d61d9662d94b3170733b768715a9cb.scope.
Nov 29 01:55:16 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:16 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:55:16 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:16 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:16.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:17 np0005539508 podman[266886]: 2025-11-29 06:55:17.096499975 +0000 UTC m=+0.424259847 container init bff865cfce159636f07c022d1dbddad749d61d9662d94b3170733b768715a9cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mccarthy, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 01:55:17 np0005539508 podman[266886]: 2025-11-29 06:55:17.104258262 +0000 UTC m=+0.432018084 container start bff865cfce159636f07c022d1dbddad749d61d9662d94b3170733b768715a9cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 01:55:17 np0005539508 jovial_mccarthy[266922]: 167 167
Nov 29 01:55:17 np0005539508 systemd[1]: libpod-bff865cfce159636f07c022d1dbddad749d61d9662d94b3170733b768715a9cb.scope: Deactivated successfully.
Nov 29 01:55:17 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:17 np0005539508 podman[266886]: 2025-11-29 06:55:17.179653446 +0000 UTC m=+0.507413318 container attach bff865cfce159636f07c022d1dbddad749d61d9662d94b3170733b768715a9cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 01:55:17 np0005539508 podman[266886]: 2025-11-29 06:55:17.180209172 +0000 UTC m=+0.507969004 container died bff865cfce159636f07c022d1dbddad749d61d9662d94b3170733b768715a9cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 01:55:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:17.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:17 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 01:55:17 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/570431191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 01:55:17 np0005539508 nova_compute[251877]: 2025-11-29 06:55:17.226 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 01:55:17 np0005539508 nova_compute[251877]: 2025-11-29 06:55:17.234 251881 DEBUG nova.compute.provider_tree [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed in ProviderTree for provider: 36ed0248-8d04-4532-95bb-daab89f12202 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 01:55:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:55:17.248 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:55:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:55:17.249 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:55:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:55:17.250 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:55:18 np0005539508 nova_compute[251877]: 2025-11-29 06:55:18.285 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed for provider 36ed0248-8d04-4532-95bb-daab89f12202 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 01:55:18 np0005539508 nova_compute[251877]: 2025-11-29 06:55:18.288 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 01:55:18 np0005539508 nova_compute[251877]: 2025-11-29 06:55:18.288 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:55:18 np0005539508 systemd[1]: var-lib-containers-storage-overlay-d0a701160b6324f0e1216325ff493fa2c61a8ec820a8061bc3a409b403325f17-merged.mount: Deactivated successfully.
Nov 29 01:55:18 np0005539508 podman[266886]: 2025-11-29 06:55:18.579548696 +0000 UTC m=+1.907308548 container remove bff865cfce159636f07c022d1dbddad749d61d9662d94b3170733b768715a9cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 29 01:55:18 np0005539508 systemd[1]: libpod-conmon-bff865cfce159636f07c022d1dbddad749d61d9662d94b3170733b768715a9cb.scope: Deactivated successfully.
Nov 29 01:55:18 np0005539508 podman[266942]: 2025-11-29 06:55:18.689553531 +0000 UTC m=+0.808466610 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 01:55:18 np0005539508 podman[267021]: 2025-11-29 06:55:18.739552013 +0000 UTC m=+0.021251197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:55:18 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:18 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:18 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:18.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:19 np0005539508 podman[267021]: 2025-11-29 06:55:19.008250176 +0000 UTC m=+0.289949340 container create e2672228c828e6899de4c93bb090a26cc868553d443295571915b3f6272ffd82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 01:55:19 np0005539508 systemd[1]: Started libpod-conmon-e2672228c828e6899de4c93bb090a26cc868553d443295571915b3f6272ffd82.scope.
Nov 29 01:55:19 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:19 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:55:19 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18437828a75201bec05d5a39490546f409a21d192731f4496c7594472c8532ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:55:19 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18437828a75201bec05d5a39490546f409a21d192731f4496c7594472c8532ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:55:19 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18437828a75201bec05d5a39490546f409a21d192731f4496c7594472c8532ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:55:19 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18437828a75201bec05d5a39490546f409a21d192731f4496c7594472c8532ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:55:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:19.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:19 np0005539508 podman[267021]: 2025-11-29 06:55:19.293086683 +0000 UTC m=+0.574785937 container init e2672228c828e6899de4c93bb090a26cc868553d443295571915b3f6272ffd82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:55:19 np0005539508 podman[267021]: 2025-11-29 06:55:19.301296423 +0000 UTC m=+0.582995627 container start e2672228c828e6899de4c93bb090a26cc868553d443295571915b3f6272ffd82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:55:19 np0005539508 podman[267021]: 2025-11-29 06:55:19.305401388 +0000 UTC m=+0.587100592 container attach e2672228c828e6899de4c93bb090a26cc868553d443295571915b3f6272ffd82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 01:55:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:55:20 np0005539508 xenodochial_pasteur[267038]: {
Nov 29 01:55:20 np0005539508 xenodochial_pasteur[267038]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:55:20 np0005539508 xenodochial_pasteur[267038]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:55:20 np0005539508 xenodochial_pasteur[267038]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:55:20 np0005539508 xenodochial_pasteur[267038]:        "osd_id": 1,
Nov 29 01:55:20 np0005539508 xenodochial_pasteur[267038]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:55:20 np0005539508 xenodochial_pasteur[267038]:        "type": "bluestore"
Nov 29 01:55:20 np0005539508 xenodochial_pasteur[267038]:    }
Nov 29 01:55:20 np0005539508 xenodochial_pasteur[267038]: }
Nov 29 01:55:20 np0005539508 systemd[1]: libpod-e2672228c828e6899de4c93bb090a26cc868553d443295571915b3f6272ffd82.scope: Deactivated successfully.
Nov 29 01:55:20 np0005539508 podman[267021]: 2025-11-29 06:55:20.258137982 +0000 UTC m=+1.539837186 container died e2672228c828e6899de4c93bb090a26cc868553d443295571915b3f6272ffd82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_pasteur, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:55:20 np0005539508 systemd[1]: var-lib-containers-storage-overlay-18437828a75201bec05d5a39490546f409a21d192731f4496c7594472c8532ab-merged.mount: Deactivated successfully.
Nov 29 01:55:20 np0005539508 podman[267021]: 2025-11-29 06:55:20.367472007 +0000 UTC m=+1.649171211 container remove e2672228c828e6899de4c93bb090a26cc868553d443295571915b3f6272ffd82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 29 01:55:20 np0005539508 systemd[1]: libpod-conmon-e2672228c828e6899de4c93bb090a26cc868553d443295571915b3f6272ffd82.scope: Deactivated successfully.
Nov 29 01:55:20 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:55:20 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:55:20 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:55:20 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:55:20 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev c6920549-8192-4b33-84bc-a6cab230da56 does not exist
Nov 29 01:55:20 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 5f87753c-690a-4b6d-b7b9-ac670c19aff6 does not exist
Nov 29 01:55:20 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 6c8a7437-2a72-470e-8fa2-0ff6f8899a10 does not exist
Nov 29 01:55:20 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:20 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:20 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:20.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:21 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:21.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:21 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:55:21 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:55:22 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:22 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:22 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:22.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:23 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:23.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:55:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:55:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:55:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:55:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:55:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:55:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:55:24 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:24 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:24 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:24.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:25 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:25.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:26.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:27 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:27.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:29.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:29 np0005539508 podman[267127]: 2025-11-29 06:55:29.130636352 +0000 UTC m=+0.088802941 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 01:55:29 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:29 np0005539508 podman[267128]: 2025-11-29 06:55:29.140494058 +0000 UTC m=+0.100463638 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 01:55:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:29.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:55:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:55:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:55:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:55:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:55:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:55:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:55:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:55:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:55:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:55:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:55:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:31.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:31 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:31.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:33.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:33 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:33.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:55:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:35.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:35 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:35.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:37.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:37 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:37.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:39.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:39 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:39.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:39 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:55:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:41.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:41 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:41.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:43.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:43 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:43.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:44 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:55:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:45.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:45 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:45.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:47.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:47 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:47.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:49.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:49 np0005539508 podman[267241]: 2025-11-29 06:55:49.138115868 +0000 UTC m=+0.094931582 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd)
Nov 29 01:55:49 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:49.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:55:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:51.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:51 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:51.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:53.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:53 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:53.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:55:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:55:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:55:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:55:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:55:54
Nov 29 01:55:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:55:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:55:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'images', '.rgw.root', 'volumes']
Nov 29 01:55:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:55:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:55:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:55:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:55:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:55.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:55 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:55.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:55:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:57.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:55:57 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:57.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:59.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:59 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:55:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:55:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:55:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:59.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:55:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:56:00 np0005539508 podman[267318]: 2025-11-29 06:56:00.106124082 +0000 UTC m=+0.063041998 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 01:56:00 np0005539508 podman[267319]: 2025-11-29 06:56:00.144221981 +0000 UTC m=+0.105509660 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 01:56:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:01.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:01 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:56:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:01.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:56:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:03.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:03 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:56:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:03.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:56:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:56:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:05.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:05 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 01:56:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:05.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 01:56:05 np0005539508 nova_compute[251877]: 2025-11-29 06:56:05.485 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 4.68 sec#033[00m
Nov 29 01:56:07 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 01:56:07 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 5513 writes, 24K keys, 5513 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 5513 writes, 5513 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1431 writes, 5940 keys, 1431 commit groups, 1.0 writes per commit group, ingest: 10.24 MB, 0.02 MB/s#012Interval WAL: 1431 writes, 1431 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      9.9      3.03              0.12        13    0.233       0      0       0.0       0.0#012  L6      1/0    8.83 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.6     28.1     23.4      4.63              0.42        12    0.386     60K   6313       0.0       0.0#012 Sum      1/0    8.83 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.6     16.9     18.1      7.66              0.54        25    0.306     60K   6313       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.2     33.1     33.0      1.19              0.19         8    0.149     21K   1990       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     28.1     23.4      4.63              0.42        12    0.386     60K   6313       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.9      3.03              0.12        12    0.252       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.029, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.13 GB read, 0.05 MB/s read, 7.7 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 1.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e1a58311f0#2 capacity: 304.00 MB usage: 11.17 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000111 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(605,10.69 MB,3.51709%) FilterBlock(26,169.55 KB,0.0544648%) IndexBlock(26,319.33 KB,0.10258%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 01:56:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:07.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:07 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:07.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:09.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:09 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:09.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:09 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:56:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:11.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:11 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:11.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:13.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:13 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:13.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:56:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:56:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:56:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:56:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:56:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:56:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:56:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:56:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:56:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:56:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:56:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:56:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:56:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:56:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:56:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:56:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:56:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:56:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:56:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:56:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:56:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:56:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:56:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:56:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:56:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:15.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:56:15 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:15.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:17.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:17 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:56:17.248 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:56:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:56:17.249 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:56:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:56:17.249 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:56:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:17.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:18 np0005539508 nova_compute[251877]: 2025-11-29 06:56:18.291 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:56:18 np0005539508 nova_compute[251877]: 2025-11-29 06:56:18.292 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:56:18 np0005539508 nova_compute[251877]: 2025-11-29 06:56:18.498 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:56:18 np0005539508 nova_compute[251877]: 2025-11-29 06:56:18.499 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 01:56:18 np0005539508 nova_compute[251877]: 2025-11-29 06:56:18.499 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 01:56:18 np0005539508 nova_compute[251877]: 2025-11-29 06:56:18.700 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 01:56:18 np0005539508 nova_compute[251877]: 2025-11-29 06:56:18.703 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:56:18 np0005539508 nova_compute[251877]: 2025-11-29 06:56:18.703 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:56:18 np0005539508 nova_compute[251877]: 2025-11-29 06:56:18.704 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:56:18 np0005539508 nova_compute[251877]: 2025-11-29 06:56:18.704 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:56:18 np0005539508 nova_compute[251877]: 2025-11-29 06:56:18.704 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:56:18 np0005539508 nova_compute[251877]: 2025-11-29 06:56:18.705 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:56:18 np0005539508 nova_compute[251877]: 2025-11-29 06:56:18.705 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 01:56:18 np0005539508 nova_compute[251877]: 2025-11-29 06:56:18.706 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:56:18 np0005539508 nova_compute[251877]: 2025-11-29 06:56:18.821 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:56:18 np0005539508 nova_compute[251877]: 2025-11-29 06:56:18.822 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:56:18 np0005539508 nova_compute[251877]: 2025-11-29 06:56:18.822 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:56:18 np0005539508 nova_compute[251877]: 2025-11-29 06:56:18.823 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 01:56:18 np0005539508 nova_compute[251877]: 2025-11-29 06:56:18.823 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 01:56:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:19.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:19 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:56:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:19.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1431858114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 01:56:19 np0005539508 nova_compute[251877]: 2025-11-29 06:56:19.306 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.377819) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399379378209, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2079, "num_deletes": 251, "total_data_size": 3950000, "memory_usage": 4013304, "flush_reason": "Manual Compaction"}
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399379404042, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 3885925, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22800, "largest_seqno": 24878, "table_properties": {"data_size": 3876525, "index_size": 5958, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18781, "raw_average_key_size": 20, "raw_value_size": 3857883, "raw_average_value_size": 4130, "num_data_blocks": 266, "num_entries": 934, "num_filter_entries": 934, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764399150, "oldest_key_time": 1764399150, "file_creation_time": 1764399379, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 26214 microseconds, and 9293 cpu microseconds.
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.404115) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 3885925 bytes OK
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.404146) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.405911) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.406007) EVENT_LOG_v1 {"time_micros": 1764399379405998, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.406036) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 3941580, prev total WAL file size 3941580, number of live WAL files 2.
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.408208) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(3794KB)], [53(9042KB)]
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399379408270, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 13144976, "oldest_snapshot_seqno": -1}
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5293 keys, 11153902 bytes, temperature: kUnknown
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399379484966, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 11153902, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11116072, "index_size": 23512, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13253, "raw_key_size": 133939, "raw_average_key_size": 25, "raw_value_size": 11017709, "raw_average_value_size": 2081, "num_data_blocks": 968, "num_entries": 5293, "num_filter_entries": 5293, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 1764399379, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.485315) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 11153902 bytes
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.486623) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.0 rd, 145.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 8.8 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 5810, records dropped: 517 output_compression: NoCompression
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.486643) EVENT_LOG_v1 {"time_micros": 1764399379486633, "job": 28, "event": "compaction_finished", "compaction_time_micros": 76860, "compaction_time_cpu_micros": 25602, "output_level": 6, "num_output_files": 1, "total_output_size": 11153902, "num_input_records": 5810, "num_output_records": 5293, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399379487581, "job": 28, "event": "table_file_deletion", "file_number": 55}
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399379489501, "job": 28, "event": "table_file_deletion", "file_number": 53}
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.407991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.489600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.489607) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.489610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.489613) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.489616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 01:56:19 np0005539508 nova_compute[251877]: 2025-11-29 06:56:19.533 251881 WARNING nova.virt.libvirt.driver [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 01:56:19 np0005539508 nova_compute[251877]: 2025-11-29 06:56:19.534 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5204MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 01:56:19 np0005539508 nova_compute[251877]: 2025-11-29 06:56:19.535 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:56:19 np0005539508 nova_compute[251877]: 2025-11-29 06:56:19.535 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:56:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:56:20 np0005539508 podman[267451]: 2025-11-29 06:56:20.108350687 +0000 UTC m=+0.075905228 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 29 01:56:20 np0005539508 nova_compute[251877]: 2025-11-29 06:56:20.503 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 01:56:20 np0005539508 nova_compute[251877]: 2025-11-29 06:56:20.504 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 01:56:20 np0005539508 nova_compute[251877]: 2025-11-29 06:56:20.600 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Refreshing inventories for resource provider 36ed0248-8d04-4532-95bb-daab89f12202 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 01:56:20 np0005539508 nova_compute[251877]: 2025-11-29 06:56:20.707 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Updating ProviderTree inventory for provider 36ed0248-8d04-4532-95bb-daab89f12202 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 01:56:20 np0005539508 nova_compute[251877]: 2025-11-29 06:56:20.708 251881 DEBUG nova.compute.provider_tree [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Updating inventory in ProviderTree for provider 36ed0248-8d04-4532-95bb-daab89f12202 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 01:56:20 np0005539508 nova_compute[251877]: 2025-11-29 06:56:20.727 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Refreshing aggregate associations for resource provider 36ed0248-8d04-4532-95bb-daab89f12202, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 01:56:20 np0005539508 nova_compute[251877]: 2025-11-29 06:56:20.754 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Refreshing trait associations for resource provider 36ed0248-8d04-4532-95bb-daab89f12202, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 01:56:20 np0005539508 nova_compute[251877]: 2025-11-29 06:56:20.782 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 01:56:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:21.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:21 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 01:56:21 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/886911526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 01:56:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:56:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:21.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:56:21 np0005539508 nova_compute[251877]: 2025-11-29 06:56:21.269 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 01:56:21 np0005539508 nova_compute[251877]: 2025-11-29 06:56:21.275 251881 DEBUG nova.compute.provider_tree [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed in ProviderTree for provider: 36ed0248-8d04-4532-95bb-daab89f12202 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 01:56:21 np0005539508 nova_compute[251877]: 2025-11-29 06:56:21.662 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed for provider 36ed0248-8d04-4532-95bb-daab89f12202 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 01:56:21 np0005539508 nova_compute[251877]: 2025-11-29 06:56:21.665 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 01:56:21 np0005539508 nova_compute[251877]: 2025-11-29 06:56:21.665 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:56:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:56:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:56:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:56:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:56:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:56:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:56:22 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 35e135d4-5986-45d0-81d5-1eff459e1465 does not exist
Nov 29 01:56:22 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 6b0359db-83da-475b-a13c-30e41d024927 does not exist
Nov 29 01:56:22 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev aed08f34-b9cf-4b98-ae4b-25a775fb7d8b does not exist
Nov 29 01:56:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:56:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:56:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:56:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:56:22 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:56:22 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:56:22 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:56:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:23.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:23 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:56:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:23.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:56:23 np0005539508 podman[267772]: 2025-11-29 06:56:23.743514024 +0000 UTC m=+0.093821126 container create 9818aa1b275f3b8e15534ffa5e3e38051b89eb9c072d87d71fb418f7bee6d65b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_raman, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 01:56:23 np0005539508 systemd[1]: Started libpod-conmon-9818aa1b275f3b8e15534ffa5e3e38051b89eb9c072d87d71fb418f7bee6d65b.scope.
Nov 29 01:56:23 np0005539508 podman[267772]: 2025-11-29 06:56:23.714245751 +0000 UTC m=+0.064552843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:56:23 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:56:23 np0005539508 podman[267772]: 2025-11-29 06:56:23.845627998 +0000 UTC m=+0.195935080 container init 9818aa1b275f3b8e15534ffa5e3e38051b89eb9c072d87d71fb418f7bee6d65b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_raman, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 01:56:23 np0005539508 podman[267772]: 2025-11-29 06:56:23.857666753 +0000 UTC m=+0.207973825 container start 9818aa1b275f3b8e15534ffa5e3e38051b89eb9c072d87d71fb418f7bee6d65b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_raman, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:56:23 np0005539508 podman[267772]: 2025-11-29 06:56:23.861143959 +0000 UTC m=+0.211451061 container attach 9818aa1b275f3b8e15534ffa5e3e38051b89eb9c072d87d71fb418f7bee6d65b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 01:56:23 np0005539508 loving_raman[267788]: 167 167
Nov 29 01:56:23 np0005539508 systemd[1]: libpod-9818aa1b275f3b8e15534ffa5e3e38051b89eb9c072d87d71fb418f7bee6d65b.scope: Deactivated successfully.
Nov 29 01:56:23 np0005539508 conmon[267788]: conmon 9818aa1b275f3b8e1553 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9818aa1b275f3b8e15534ffa5e3e38051b89eb9c072d87d71fb418f7bee6d65b.scope/container/memory.events
Nov 29 01:56:23 np0005539508 podman[267772]: 2025-11-29 06:56:23.868435652 +0000 UTC m=+0.218742764 container died 9818aa1b275f3b8e15534ffa5e3e38051b89eb9c072d87d71fb418f7bee6d65b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 01:56:23 np0005539508 systemd[1]: var-lib-containers-storage-overlay-14f803234a0a9a9d62e4b6f598794b215fc053b0c29a5d5baa4a701c3835ded2-merged.mount: Deactivated successfully.
Nov 29 01:56:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:56:23 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:56:23 np0005539508 podman[267772]: 2025-11-29 06:56:23.924665883 +0000 UTC m=+0.274972995 container remove 9818aa1b275f3b8e15534ffa5e3e38051b89eb9c072d87d71fb418f7bee6d65b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 29 01:56:23 np0005539508 systemd[1]: libpod-conmon-9818aa1b275f3b8e15534ffa5e3e38051b89eb9c072d87d71fb418f7bee6d65b.scope: Deactivated successfully.
Nov 29 01:56:24 np0005539508 podman[267812]: 2025-11-29 06:56:24.162852055 +0000 UTC m=+0.085321480 container create db72fac8d643735d2bfec78766de3042de1a5559fda99c3a90e16b1e9c26dcfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_noether, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 01:56:24 np0005539508 systemd[1]: Started libpod-conmon-db72fac8d643735d2bfec78766de3042de1a5559fda99c3a90e16b1e9c26dcfc.scope.
Nov 29 01:56:24 np0005539508 podman[267812]: 2025-11-29 06:56:24.133789278 +0000 UTC m=+0.056258663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:56:24 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:56:24 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/621c267a5f8763c1b61a1f660d11fb1485f8424dda1d4d736242eddd24e5ca6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:56:24 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/621c267a5f8763c1b61a1f660d11fb1485f8424dda1d4d736242eddd24e5ca6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:56:24 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/621c267a5f8763c1b61a1f660d11fb1485f8424dda1d4d736242eddd24e5ca6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:56:24 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/621c267a5f8763c1b61a1f660d11fb1485f8424dda1d4d736242eddd24e5ca6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:56:24 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/621c267a5f8763c1b61a1f660d11fb1485f8424dda1d4d736242eddd24e5ca6c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:56:24 np0005539508 podman[267812]: 2025-11-29 06:56:24.281574691 +0000 UTC m=+0.204044046 container init db72fac8d643735d2bfec78766de3042de1a5559fda99c3a90e16b1e9c26dcfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_noether, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:56:24 np0005539508 podman[267812]: 2025-11-29 06:56:24.29595763 +0000 UTC m=+0.218426945 container start db72fac8d643735d2bfec78766de3042de1a5559fda99c3a90e16b1e9c26dcfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_noether, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 01:56:24 np0005539508 podman[267812]: 2025-11-29 06:56:24.301266918 +0000 UTC m=+0.223736283 container attach db72fac8d643735d2bfec78766de3042de1a5559fda99c3a90e16b1e9c26dcfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_noether, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 01:56:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:56:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:56:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:56:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:56:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:56:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:56:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:56:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:25.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:25 np0005539508 hungry_noether[267828]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:56:25 np0005539508 hungry_noether[267828]: --> relative data size: 1.0
Nov 29 01:56:25 np0005539508 hungry_noether[267828]: --> All data devices are unavailable
Nov 29 01:56:25 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:25 np0005539508 systemd[1]: libpod-db72fac8d643735d2bfec78766de3042de1a5559fda99c3a90e16b1e9c26dcfc.scope: Deactivated successfully.
Nov 29 01:56:25 np0005539508 podman[267812]: 2025-11-29 06:56:25.179432217 +0000 UTC m=+1.101901542 container died db72fac8d643735d2bfec78766de3042de1a5559fda99c3a90e16b1e9c26dcfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_noether, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 01:56:25 np0005539508 systemd[1]: var-lib-containers-storage-overlay-621c267a5f8763c1b61a1f660d11fb1485f8424dda1d4d736242eddd24e5ca6c-merged.mount: Deactivated successfully.
Nov 29 01:56:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:56:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:25.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:56:25 np0005539508 podman[267812]: 2025-11-29 06:56:25.423861981 +0000 UTC m=+1.346331316 container remove db72fac8d643735d2bfec78766de3042de1a5559fda99c3a90e16b1e9c26dcfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_noether, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 01:56:25 np0005539508 systemd[1]: libpod-conmon-db72fac8d643735d2bfec78766de3042de1a5559fda99c3a90e16b1e9c26dcfc.scope: Deactivated successfully.
Nov 29 01:56:26 np0005539508 podman[267995]: 2025-11-29 06:56:26.134328935 +0000 UTC m=+0.027844914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:56:26 np0005539508 podman[267995]: 2025-11-29 06:56:26.26638244 +0000 UTC m=+0.159898439 container create 1607369c730a69bc041fb44bb2deec4a813436d3f6db7a07b292548e3edd99ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_einstein, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:56:26 np0005539508 systemd[1]: Started libpod-conmon-1607369c730a69bc041fb44bb2deec4a813436d3f6db7a07b292548e3edd99ff.scope.
Nov 29 01:56:26 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:56:26 np0005539508 podman[267995]: 2025-11-29 06:56:26.35171501 +0000 UTC m=+0.245230989 container init 1607369c730a69bc041fb44bb2deec4a813436d3f6db7a07b292548e3edd99ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_einstein, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:56:26 np0005539508 podman[267995]: 2025-11-29 06:56:26.357975954 +0000 UTC m=+0.251491923 container start 1607369c730a69bc041fb44bb2deec4a813436d3f6db7a07b292548e3edd99ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_einstein, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:56:26 np0005539508 funny_einstein[268011]: 167 167
Nov 29 01:56:26 np0005539508 podman[267995]: 2025-11-29 06:56:26.361365688 +0000 UTC m=+0.254881677 container attach 1607369c730a69bc041fb44bb2deec4a813436d3f6db7a07b292548e3edd99ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_einstein, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 01:56:26 np0005539508 systemd[1]: libpod-1607369c730a69bc041fb44bb2deec4a813436d3f6db7a07b292548e3edd99ff.scope: Deactivated successfully.
Nov 29 01:56:26 np0005539508 podman[267995]: 2025-11-29 06:56:26.362179011 +0000 UTC m=+0.255694970 container died 1607369c730a69bc041fb44bb2deec4a813436d3f6db7a07b292548e3edd99ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_einstein, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:56:26 np0005539508 systemd[1]: var-lib-containers-storage-overlay-8c21f3ca35695c0f7a50d7bb34e8fedbca99f94108739f12f5bfea8723dea9df-merged.mount: Deactivated successfully.
Nov 29 01:56:26 np0005539508 podman[267995]: 2025-11-29 06:56:26.581868089 +0000 UTC m=+0.475384058 container remove 1607369c730a69bc041fb44bb2deec4a813436d3f6db7a07b292548e3edd99ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_einstein, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 01:56:26 np0005539508 systemd[1]: libpod-conmon-1607369c730a69bc041fb44bb2deec4a813436d3f6db7a07b292548e3edd99ff.scope: Deactivated successfully.
Nov 29 01:56:26 np0005539508 podman[268037]: 2025-11-29 06:56:26.800161999 +0000 UTC m=+0.048687672 container create 0839bb3b01b59df9f9a8cb7b4001321988eff82329517f4d20a5fa30ed9636d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_morse, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:56:26 np0005539508 systemd[1]: Started libpod-conmon-0839bb3b01b59df9f9a8cb7b4001321988eff82329517f4d20a5fa30ed9636d6.scope.
Nov 29 01:56:26 np0005539508 podman[268037]: 2025-11-29 06:56:26.775045852 +0000 UTC m=+0.023571615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:56:26 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:56:26 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4662984d359c09dbe41aee47fbbfd76e205baf0ba40a4c73dddb7882eced013/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:56:26 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4662984d359c09dbe41aee47fbbfd76e205baf0ba40a4c73dddb7882eced013/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:56:26 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4662984d359c09dbe41aee47fbbfd76e205baf0ba40a4c73dddb7882eced013/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:56:26 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4662984d359c09dbe41aee47fbbfd76e205baf0ba40a4c73dddb7882eced013/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:56:26 np0005539508 podman[268037]: 2025-11-29 06:56:26.920389727 +0000 UTC m=+0.168915440 container init 0839bb3b01b59df9f9a8cb7b4001321988eff82329517f4d20a5fa30ed9636d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:56:26 np0005539508 podman[268037]: 2025-11-29 06:56:26.930399065 +0000 UTC m=+0.178924778 container start 0839bb3b01b59df9f9a8cb7b4001321988eff82329517f4d20a5fa30ed9636d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_morse, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:56:26 np0005539508 podman[268037]: 2025-11-29 06:56:26.934514179 +0000 UTC m=+0.183039872 container attach 0839bb3b01b59df9f9a8cb7b4001321988eff82329517f4d20a5fa30ed9636d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_morse, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 01:56:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:27.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:27 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:27.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:27 np0005539508 amazing_morse[268053]: {
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:    "1": [
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:        {
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:            "devices": [
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:                "/dev/loop3"
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:            ],
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:            "lv_name": "ceph_lv0",
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:            "lv_size": "7511998464",
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:            "name": "ceph_lv0",
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:            "tags": {
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:                "ceph.cluster_name": "ceph",
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:                "ceph.crush_device_class": "",
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:                "ceph.encrypted": "0",
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:                "ceph.osd_id": "1",
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:                "ceph.type": "block",
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:                "ceph.vdo": "0"
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:            },
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:            "type": "block",
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:            "vg_name": "ceph_vg0"
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:        }
Nov 29 01:56:27 np0005539508 amazing_morse[268053]:    ]
Nov 29 01:56:27 np0005539508 amazing_morse[268053]: }
Nov 29 01:56:27 np0005539508 systemd[1]: libpod-0839bb3b01b59df9f9a8cb7b4001321988eff82329517f4d20a5fa30ed9636d6.scope: Deactivated successfully.
Nov 29 01:56:27 np0005539508 podman[268037]: 2025-11-29 06:56:27.682970598 +0000 UTC m=+0.931496301 container died 0839bb3b01b59df9f9a8cb7b4001321988eff82329517f4d20a5fa30ed9636d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_morse, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 01:56:27 np0005539508 systemd[1]: var-lib-containers-storage-overlay-b4662984d359c09dbe41aee47fbbfd76e205baf0ba40a4c73dddb7882eced013-merged.mount: Deactivated successfully.
Nov 29 01:56:27 np0005539508 podman[268037]: 2025-11-29 06:56:27.76518899 +0000 UTC m=+1.013714703 container remove 0839bb3b01b59df9f9a8cb7b4001321988eff82329517f4d20a5fa30ed9636d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:56:27 np0005539508 systemd[1]: libpod-conmon-0839bb3b01b59df9f9a8cb7b4001321988eff82329517f4d20a5fa30ed9636d6.scope: Deactivated successfully.
Nov 29 01:56:28 np0005539508 podman[268215]: 2025-11-29 06:56:28.475599602 +0000 UTC m=+0.074408247 container create 9d0894bc3a400dedfb8dc1620f7bda76a79e45c46d2b3d28202ce5a6935eefbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:56:28 np0005539508 systemd[1]: Started libpod-conmon-9d0894bc3a400dedfb8dc1620f7bda76a79e45c46d2b3d28202ce5a6935eefbd.scope.
Nov 29 01:56:28 np0005539508 podman[268215]: 2025-11-29 06:56:28.433859793 +0000 UTC m=+0.032668518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:56:28 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:56:28 np0005539508 podman[268215]: 2025-11-29 06:56:28.55583735 +0000 UTC m=+0.154646045 container init 9d0894bc3a400dedfb8dc1620f7bda76a79e45c46d2b3d28202ce5a6935eefbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 01:56:28 np0005539508 podman[268215]: 2025-11-29 06:56:28.567300308 +0000 UTC m=+0.166108993 container start 9d0894bc3a400dedfb8dc1620f7bda76a79e45c46d2b3d28202ce5a6935eefbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:56:28 np0005539508 xenodochial_fermi[268232]: 167 167
Nov 29 01:56:28 np0005539508 systemd[1]: libpod-9d0894bc3a400dedfb8dc1620f7bda76a79e45c46d2b3d28202ce5a6935eefbd.scope: Deactivated successfully.
Nov 29 01:56:28 np0005539508 podman[268215]: 2025-11-29 06:56:28.576471283 +0000 UTC m=+0.175279958 container attach 9d0894bc3a400dedfb8dc1620f7bda76a79e45c46d2b3d28202ce5a6935eefbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 01:56:28 np0005539508 podman[268215]: 2025-11-29 06:56:28.577021958 +0000 UTC m=+0.175830593 container died 9d0894bc3a400dedfb8dc1620f7bda76a79e45c46d2b3d28202ce5a6935eefbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:56:28 np0005539508 systemd[1]: var-lib-containers-storage-overlay-5eebab067f95d6b1ec4d3f228680d5f8c36a657802fa63a404231c993c9aae38-merged.mount: Deactivated successfully.
Nov 29 01:56:28 np0005539508 podman[268215]: 2025-11-29 06:56:28.682560448 +0000 UTC m=+0.281369103 container remove 9d0894bc3a400dedfb8dc1620f7bda76a79e45c46d2b3d28202ce5a6935eefbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:56:28 np0005539508 systemd[1]: libpod-conmon-9d0894bc3a400dedfb8dc1620f7bda76a79e45c46d2b3d28202ce5a6935eefbd.scope: Deactivated successfully.
Nov 29 01:56:28 np0005539508 podman[268256]: 2025-11-29 06:56:28.859701165 +0000 UTC m=+0.038361466 container create a05bc3d4303f4a3355d0ba9a4e9edbbae294e91ae26ea439a5cc1a4376e64d4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 29 01:56:28 np0005539508 systemd[1]: Started libpod-conmon-a05bc3d4303f4a3355d0ba9a4e9edbbae294e91ae26ea439a5cc1a4376e64d4d.scope.
Nov 29 01:56:28 np0005539508 podman[268256]: 2025-11-29 06:56:28.843071404 +0000 UTC m=+0.021731665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:56:28 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:56:28 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27b181f0cf5d29f9b54a367167652028ad973de59311a04976dd6df952858118/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:56:28 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27b181f0cf5d29f9b54a367167652028ad973de59311a04976dd6df952858118/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:56:28 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27b181f0cf5d29f9b54a367167652028ad973de59311a04976dd6df952858118/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:56:28 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27b181f0cf5d29f9b54a367167652028ad973de59311a04976dd6df952858118/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:56:28 np0005539508 podman[268256]: 2025-11-29 06:56:28.976826637 +0000 UTC m=+0.155486918 container init a05bc3d4303f4a3355d0ba9a4e9edbbae294e91ae26ea439a5cc1a4376e64d4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:56:28 np0005539508 podman[268256]: 2025-11-29 06:56:28.986046253 +0000 UTC m=+0.164706564 container start a05bc3d4303f4a3355d0ba9a4e9edbbae294e91ae26ea439a5cc1a4376e64d4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bell, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:56:29 np0005539508 podman[268256]: 2025-11-29 06:56:29.039470045 +0000 UTC m=+0.218130376 container attach a05bc3d4303f4a3355d0ba9a4e9edbbae294e91ae26ea439a5cc1a4376e64d4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bell, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:56:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:29.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:29 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:56:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:29.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:56:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:56:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:56:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:56:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:56:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:56:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:56:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:56:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:56:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:56:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:56:29 np0005539508 youthful_bell[268273]: {
Nov 29 01:56:29 np0005539508 youthful_bell[268273]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:56:29 np0005539508 youthful_bell[268273]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:56:29 np0005539508 youthful_bell[268273]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:56:29 np0005539508 youthful_bell[268273]:        "osd_id": 1,
Nov 29 01:56:29 np0005539508 youthful_bell[268273]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:56:29 np0005539508 youthful_bell[268273]:        "type": "bluestore"
Nov 29 01:56:29 np0005539508 youthful_bell[268273]:    }
Nov 29 01:56:29 np0005539508 youthful_bell[268273]: }
Nov 29 01:56:29 np0005539508 systemd[1]: libpod-a05bc3d4303f4a3355d0ba9a4e9edbbae294e91ae26ea439a5cc1a4376e64d4d.scope: Deactivated successfully.
Nov 29 01:56:29 np0005539508 podman[268256]: 2025-11-29 06:56:29.941129986 +0000 UTC m=+1.119790257 container died a05bc3d4303f4a3355d0ba9a4e9edbbae294e91ae26ea439a5cc1a4376e64d4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 01:56:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:56:30 np0005539508 systemd[1]: var-lib-containers-storage-overlay-27b181f0cf5d29f9b54a367167652028ad973de59311a04976dd6df952858118-merged.mount: Deactivated successfully.
Nov 29 01:56:30 np0005539508 podman[268256]: 2025-11-29 06:56:30.582003508 +0000 UTC m=+1.760663809 container remove a05bc3d4303f4a3355d0ba9a4e9edbbae294e91ae26ea439a5cc1a4376e64d4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 01:56:30 np0005539508 systemd[1]: libpod-conmon-a05bc3d4303f4a3355d0ba9a4e9edbbae294e91ae26ea439a5cc1a4376e64d4d.scope: Deactivated successfully.
Nov 29 01:56:30 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:56:30 np0005539508 podman[268308]: 2025-11-29 06:56:30.719485634 +0000 UTC m=+0.085682009 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:56:30 np0005539508 podman[268309]: 2025-11-29 06:56:30.794091626 +0000 UTC m=+0.158733828 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 01:56:31 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:56:31 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:56:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:31.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:31 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:31.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:31 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:56:31 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 238f2672-0826-4f72-87b6-7e211a794709 does not exist
Nov 29 01:56:31 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 8425c4c4-3b6f-4678-ad4d-b9f04aaeedfa does not exist
Nov 29 01:56:31 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 779da6db-0c1a-4398-9914-ffdc241f16cb does not exist
Nov 29 01:56:33 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:56:33 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:56:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:33.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:33 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:56:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:33.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:56:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:35.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:35 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:35.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:35 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:56:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:37.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:37 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:37.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:56:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:39.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:56:39 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:56:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:39.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:56:40 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:56:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:41.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:41 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:56:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:41.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:56:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:43.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:43 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:43.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:45.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:45 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:56:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:45.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:56:45 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:56:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:47.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:47 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:47.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:56:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:49.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:56:49 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:49.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:56:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:51.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:51 np0005539508 podman[268469]: 2025-11-29 06:56:51.151360506 +0000 UTC m=+0.108080751 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Nov 29 01:56:51 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:56:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:51.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:56:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:53.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:53 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:56:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:53.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:56:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:56:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:56:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:56:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:56:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:56:54
Nov 29 01:56:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:56:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:56:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'vms', 'default.rgw.meta', 'volumes', 'images']
Nov 29 01:56:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:56:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:56:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:56:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:55.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:55 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:55.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:56:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:57.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:57 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:57.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:59.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:56:59 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:56:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:56:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:56:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:59.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:01 np0005539508 podman[268550]: 2025-11-29 06:57:01.112710684 +0000 UTC m=+0.094798393 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 01:57:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:01.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:01 np0005539508 podman[268551]: 2025-11-29 06:57:01.162945428 +0000 UTC m=+0.141889340 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 01:57:01 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:01 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:57:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:57:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:01.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:57:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:03.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:03 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:57:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:03.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:57:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:05.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:05 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:05.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:57:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:07.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:07 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:07.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:09.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:09 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:09.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:57:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:11.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:57:11 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:11 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:57:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:11.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:13.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:13 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:57:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:57:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:57:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:57:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:57:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:57:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:57:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:57:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:57:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:57:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:57:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:57:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:57:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:57:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:57:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:57:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:57:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:57:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:57:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:57:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:57:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:57:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:57:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:57:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:13.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:57:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:15.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:15 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:15.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:16 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:57:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:57:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:17.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:57:17 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:57:17.249 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:57:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:57:17.250 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:57:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:57:17.251 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:57:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:17.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:19.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:19 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:19.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:57:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:21.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:57:21 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:21.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:21 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:57:21 np0005539508 nova_compute[251877]: 2025-11-29 06:57:21.667 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:57:21 np0005539508 nova_compute[251877]: 2025-11-29 06:57:21.667 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:57:21 np0005539508 nova_compute[251877]: 2025-11-29 06:57:21.667 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 01:57:21 np0005539508 nova_compute[251877]: 2025-11-29 06:57:21.668 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 01:57:22 np0005539508 podman[268656]: 2025-11-29 06:57:22.083677303 +0000 UTC m=+0.053129686 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:57:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:23.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:23 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:23.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:57:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:57:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:57:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:57:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:57:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:57:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:57:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:25.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:57:25 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:25.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:26 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:57:26 np0005539508 nova_compute[251877]: 2025-11-29 06:57:26.762 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 01:57:26 np0005539508 nova_compute[251877]: 2025-11-29 06:57:26.762 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:57:26 np0005539508 nova_compute[251877]: 2025-11-29 06:57:26.763 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:57:26 np0005539508 nova_compute[251877]: 2025-11-29 06:57:26.763 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:57:26 np0005539508 nova_compute[251877]: 2025-11-29 06:57:26.763 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:57:26 np0005539508 nova_compute[251877]: 2025-11-29 06:57:26.763 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:57:26 np0005539508 nova_compute[251877]: 2025-11-29 06:57:26.764 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:57:26 np0005539508 nova_compute[251877]: 2025-11-29 06:57:26.764 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 01:57:26 np0005539508 nova_compute[251877]: 2025-11-29 06:57:26.764 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:57:27 np0005539508 nova_compute[251877]: 2025-11-29 06:57:27.172 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:57:27 np0005539508 nova_compute[251877]: 2025-11-29 06:57:27.173 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:57:27 np0005539508 nova_compute[251877]: 2025-11-29 06:57:27.173 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:57:27 np0005539508 nova_compute[251877]: 2025-11-29 06:57:27.173 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 01:57:27 np0005539508 nova_compute[251877]: 2025-11-29 06:57:27.174 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 01:57:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:27.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:27 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:27.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:27 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 01:57:27 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1769306037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 01:57:27 np0005539508 nova_compute[251877]: 2025-11-29 06:57:27.795 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 01:57:27 np0005539508 nova_compute[251877]: 2025-11-29 06:57:27.949 251881 WARNING nova.virt.libvirt.driver [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 01:57:27 np0005539508 nova_compute[251877]: 2025-11-29 06:57:27.951 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5195MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 01:57:27 np0005539508 nova_compute[251877]: 2025-11-29 06:57:27.951 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:57:27 np0005539508 nova_compute[251877]: 2025-11-29 06:57:27.951 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:57:28 np0005539508 nova_compute[251877]: 2025-11-29 06:57:28.314 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 01:57:28 np0005539508 nova_compute[251877]: 2025-11-29 06:57:28.314 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 01:57:28 np0005539508 nova_compute[251877]: 2025-11-29 06:57:28.331 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 01:57:28 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 01:57:28 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/526359832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 01:57:28 np0005539508 nova_compute[251877]: 2025-11-29 06:57:28.755 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 01:57:28 np0005539508 nova_compute[251877]: 2025-11-29 06:57:28.760 251881 DEBUG nova.compute.provider_tree [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed in ProviderTree for provider: 36ed0248-8d04-4532-95bb-daab89f12202 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 01:57:28 np0005539508 nova_compute[251877]: 2025-11-29 06:57:28.881 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed for provider 36ed0248-8d04-4532-95bb-daab89f12202 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 01:57:28 np0005539508 nova_compute[251877]: 2025-11-29 06:57:28.882 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 01:57:28 np0005539508 nova_compute[251877]: 2025-11-29 06:57:28.883 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:57:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:29.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:29 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:29.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:57:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:57:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:57:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:57:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:57:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:57:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:57:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:57:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:57:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:57:31 np0005539508 nova_compute[251877]: 2025-11-29 06:57:31.169 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:57:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:31 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:31.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:31.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:31 np0005539508 nova_compute[251877]: 2025-11-29 06:57:31.337 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:57:31 np0005539508 nova_compute[251877]: 2025-11-29 06:57:31.338 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 01:57:31 np0005539508 nova_compute[251877]: 2025-11-29 06:57:31.338 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 01:57:31 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:57:31 np0005539508 nova_compute[251877]: 2025-11-29 06:57:31.506 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 01:57:31 np0005539508 nova_compute[251877]: 2025-11-29 06:57:31.507 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:57:31 np0005539508 podman[268752]: 2025-11-29 06:57:31.967592811 +0000 UTC m=+0.053183117 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 01:57:31 np0005539508 podman[268753]: 2025-11-29 06:57:31.991167756 +0000 UTC m=+0.075988931 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Nov 29 01:57:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:57:32 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:57:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 01:57:32 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:57:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 01:57:32 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:57:32 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 20167e10-ca0a-443e-9dd6-d627c331691a does not exist
Nov 29 01:57:32 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev c8f264a3-2e06-4f3e-91c0-309ea57b7fbd does not exist
Nov 29 01:57:32 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 51bda47d-7748-457e-9df3-357005b1281d does not exist
Nov 29 01:57:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 01:57:32 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 01:57:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 01:57:32 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:57:32 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:57:32 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:57:33 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:33.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:33.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:33 np0005539508 podman[269045]: 2025-11-29 06:57:33.513562799 +0000 UTC m=+0.041371160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:57:33 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 01:57:33 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:57:33 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 01:57:33 np0005539508 podman[269045]: 2025-11-29 06:57:33.85329451 +0000 UTC m=+0.381102791 container create bb5e782bcd1f1d815c5ee99029f6c13c502320ba3f23dbbb640c112d6bef065c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:57:33 np0005539508 systemd[1]: Started libpod-conmon-bb5e782bcd1f1d815c5ee99029f6c13c502320ba3f23dbbb640c112d6bef065c.scope.
Nov 29 01:57:33 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:57:34 np0005539508 podman[269045]: 2025-11-29 06:57:34.019352589 +0000 UTC m=+0.547160880 container init bb5e782bcd1f1d815c5ee99029f6c13c502320ba3f23dbbb640c112d6bef065c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_tesla, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 01:57:34 np0005539508 podman[269045]: 2025-11-29 06:57:34.025628974 +0000 UTC m=+0.553437245 container start bb5e782bcd1f1d815c5ee99029f6c13c502320ba3f23dbbb640c112d6bef065c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_tesla, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:57:34 np0005539508 systemd[1]: libpod-bb5e782bcd1f1d815c5ee99029f6c13c502320ba3f23dbbb640c112d6bef065c.scope: Deactivated successfully.
Nov 29 01:57:34 np0005539508 lucid_tesla[269061]: 167 167
Nov 29 01:57:34 np0005539508 conmon[269061]: conmon bb5e782bcd1f1d815c5e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bb5e782bcd1f1d815c5ee99029f6c13c502320ba3f23dbbb640c112d6bef065c.scope/container/memory.events
Nov 29 01:57:34 np0005539508 podman[269045]: 2025-11-29 06:57:34.060585935 +0000 UTC m=+0.588394226 container attach bb5e782bcd1f1d815c5ee99029f6c13c502320ba3f23dbbb640c112d6bef065c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_tesla, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:57:34 np0005539508 podman[269045]: 2025-11-29 06:57:34.064223195 +0000 UTC m=+0.592031476 container died bb5e782bcd1f1d815c5ee99029f6c13c502320ba3f23dbbb640c112d6bef065c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_tesla, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:57:34 np0005539508 systemd[1]: var-lib-containers-storage-overlay-a71e8f1a99914d4ec1ede94c28a7b36f7b27f448fd8e2662a7d481aa2b8c9a97-merged.mount: Deactivated successfully.
Nov 29 01:57:34 np0005539508 podman[269045]: 2025-11-29 06:57:34.202992448 +0000 UTC m=+0.730800719 container remove bb5e782bcd1f1d815c5ee99029f6c13c502320ba3f23dbbb640c112d6bef065c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 01:57:34 np0005539508 systemd[1]: libpod-conmon-bb5e782bcd1f1d815c5ee99029f6c13c502320ba3f23dbbb640c112d6bef065c.scope: Deactivated successfully.
Nov 29 01:57:34 np0005539508 podman[269086]: 2025-11-29 06:57:34.35289487 +0000 UTC m=+0.034819898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:57:34 np0005539508 podman[269086]: 2025-11-29 06:57:34.493964116 +0000 UTC m=+0.175889164 container create e4925a628528e38d86ee2a9dadcacbedcce68be727c951bebc12ce41b10447eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ellis, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Nov 29 01:57:34 np0005539508 systemd[1]: Started libpod-conmon-e4925a628528e38d86ee2a9dadcacbedcce68be727c951bebc12ce41b10447eb.scope.
Nov 29 01:57:34 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:57:34 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b4c7dad47dba606e39eed5f6b80e0b4480a301b1cc4a52c2f94ac342fb1e1e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:57:34 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b4c7dad47dba606e39eed5f6b80e0b4480a301b1cc4a52c2f94ac342fb1e1e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:57:34 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b4c7dad47dba606e39eed5f6b80e0b4480a301b1cc4a52c2f94ac342fb1e1e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:57:34 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b4c7dad47dba606e39eed5f6b80e0b4480a301b1cc4a52c2f94ac342fb1e1e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:57:34 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b4c7dad47dba606e39eed5f6b80e0b4480a301b1cc4a52c2f94ac342fb1e1e6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 01:57:34 np0005539508 podman[269086]: 2025-11-29 06:57:34.666990119 +0000 UTC m=+0.348915147 container init e4925a628528e38d86ee2a9dadcacbedcce68be727c951bebc12ce41b10447eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ellis, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 01:57:34 np0005539508 podman[269086]: 2025-11-29 06:57:34.676702299 +0000 UTC m=+0.358627297 container start e4925a628528e38d86ee2a9dadcacbedcce68be727c951bebc12ce41b10447eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ellis, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:57:34 np0005539508 podman[269086]: 2025-11-29 06:57:34.707506074 +0000 UTC m=+0.389431112 container attach e4925a628528e38d86ee2a9dadcacbedcce68be727c951bebc12ce41b10447eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:57:35 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:35.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:35.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:35 np0005539508 sweet_ellis[269103]: --> passed data devices: 0 physical, 1 LVM
Nov 29 01:57:35 np0005539508 sweet_ellis[269103]: --> relative data size: 1.0
Nov 29 01:57:35 np0005539508 sweet_ellis[269103]: --> All data devices are unavailable
Nov 29 01:57:35 np0005539508 systemd[1]: libpod-e4925a628528e38d86ee2a9dadcacbedcce68be727c951bebc12ce41b10447eb.scope: Deactivated successfully.
Nov 29 01:57:35 np0005539508 podman[269086]: 2025-11-29 06:57:35.546707492 +0000 UTC m=+1.228632550 container died e4925a628528e38d86ee2a9dadcacbedcce68be727c951bebc12ce41b10447eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 01:57:36 np0005539508 systemd[1]: var-lib-containers-storage-overlay-4b4c7dad47dba606e39eed5f6b80e0b4480a301b1cc4a52c2f94ac342fb1e1e6-merged.mount: Deactivated successfully.
Nov 29 01:57:36 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:57:37 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:57:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:37.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:57:37 np0005539508 podman[269086]: 2025-11-29 06:57:37.212943038 +0000 UTC m=+2.894868076 container remove e4925a628528e38d86ee2a9dadcacbedcce68be727c951bebc12ce41b10447eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 01:57:37 np0005539508 systemd[1]: libpod-conmon-e4925a628528e38d86ee2a9dadcacbedcce68be727c951bebc12ce41b10447eb.scope: Deactivated successfully.
Nov 29 01:57:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:37.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:37 np0005539508 podman[269276]: 2025-11-29 06:57:37.918729371 +0000 UTC m=+0.108120842 container create 66f0eb4eadc5e32d666a6487b591d746ec81a301750f3f263a5efadbd68464ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 01:57:37 np0005539508 podman[269276]: 2025-11-29 06:57:37.8423076 +0000 UTC m=+0.031699131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:57:37 np0005539508 systemd[1]: Started libpod-conmon-66f0eb4eadc5e32d666a6487b591d746ec81a301750f3f263a5efadbd68464ae.scope.
Nov 29 01:57:37 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:57:38 np0005539508 podman[269276]: 2025-11-29 06:57:38.013029929 +0000 UTC m=+0.202421400 container init 66f0eb4eadc5e32d666a6487b591d746ec81a301750f3f263a5efadbd68464ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 01:57:38 np0005539508 podman[269276]: 2025-11-29 06:57:38.022389779 +0000 UTC m=+0.211781250 container start 66f0eb4eadc5e32d666a6487b591d746ec81a301750f3f263a5efadbd68464ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 01:57:38 np0005539508 podman[269276]: 2025-11-29 06:57:38.026308568 +0000 UTC m=+0.215700039 container attach 66f0eb4eadc5e32d666a6487b591d746ec81a301750f3f263a5efadbd68464ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:57:38 np0005539508 great_proskuriakova[269292]: 167 167
Nov 29 01:57:38 np0005539508 systemd[1]: libpod-66f0eb4eadc5e32d666a6487b591d746ec81a301750f3f263a5efadbd68464ae.scope: Deactivated successfully.
Nov 29 01:57:38 np0005539508 podman[269276]: 2025-11-29 06:57:38.030206826 +0000 UTC m=+0.219598327 container died 66f0eb4eadc5e32d666a6487b591d746ec81a301750f3f263a5efadbd68464ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 01:57:38 np0005539508 systemd[1]: var-lib-containers-storage-overlay-463c3b21b02fcb1945181f96e2b455622477f5dd9b1824765528adcba36b455f-merged.mount: Deactivated successfully.
Nov 29 01:57:39 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:39.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:39.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:40 np0005539508 podman[269276]: 2025-11-29 06:57:40.183728949 +0000 UTC m=+2.373120450 container remove 66f0eb4eadc5e32d666a6487b591d746ec81a301750f3f263a5efadbd68464ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 29 01:57:40 np0005539508 systemd[1]: libpod-conmon-66f0eb4eadc5e32d666a6487b591d746ec81a301750f3f263a5efadbd68464ae.scope: Deactivated successfully.
Nov 29 01:57:40 np0005539508 podman[269369]: 2025-11-29 06:57:40.393176795 +0000 UTC m=+0.045073972 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:57:41 np0005539508 podman[269369]: 2025-11-29 06:57:41.065435287 +0000 UTC m=+0.717332434 container create efd828ad2cec880b8fc410cb8a64dfac5a495a419411715794e4399be50e0080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:57:41 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:41.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:41 np0005539508 systemd[1]: Started libpod-conmon-efd828ad2cec880b8fc410cb8a64dfac5a495a419411715794e4399be50e0080.scope.
Nov 29 01:57:41 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:57:41 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65f341a8619184dcc9a322f97408c6659eab86dd02ce96f53f8094aa256a4729/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:57:41 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65f341a8619184dcc9a322f97408c6659eab86dd02ce96f53f8094aa256a4729/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:57:41 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65f341a8619184dcc9a322f97408c6659eab86dd02ce96f53f8094aa256a4729/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:57:41 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65f341a8619184dcc9a322f97408c6659eab86dd02ce96f53f8094aa256a4729/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:57:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:41.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:41 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:57:41 np0005539508 podman[269369]: 2025-11-29 06:57:41.725558422 +0000 UTC m=+1.377455599 container init efd828ad2cec880b8fc410cb8a64dfac5a495a419411715794e4399be50e0080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 01:57:41 np0005539508 podman[269369]: 2025-11-29 06:57:41.73557491 +0000 UTC m=+1.387472077 container start efd828ad2cec880b8fc410cb8a64dfac5a495a419411715794e4399be50e0080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]: {
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:    "1": [
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:        {
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:            "devices": [
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:                "/dev/loop3"
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:            ],
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:            "lv_name": "ceph_lv0",
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:            "lv_size": "7511998464",
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:            "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:            "name": "ceph_lv0",
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:            "tags": {
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:                "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:                "ceph.cephx_lockbox_secret": "",
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:                "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:                "ceph.cluster_name": "ceph",
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:                "ceph.crush_device_class": "",
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:                "ceph.encrypted": "0",
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:                "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:                "ceph.osd_id": "1",
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:                "ceph.type": "block",
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:                "ceph.vdo": "0"
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:            },
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:            "type": "block",
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:            "vg_name": "ceph_vg0"
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:        }
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]:    ]
Nov 29 01:57:42 np0005539508 intelligent_mestorf[269388]: }
Nov 29 01:57:42 np0005539508 systemd[1]: libpod-efd828ad2cec880b8fc410cb8a64dfac5a495a419411715794e4399be50e0080.scope: Deactivated successfully.
Nov 29 01:57:42 np0005539508 podman[269369]: 2025-11-29 06:57:42.598017603 +0000 UTC m=+2.249914770 container attach efd828ad2cec880b8fc410cb8a64dfac5a495a419411715794e4399be50e0080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:57:42 np0005539508 podman[269369]: 2025-11-29 06:57:42.601314714 +0000 UTC m=+2.253211951 container died efd828ad2cec880b8fc410cb8a64dfac5a495a419411715794e4399be50e0080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 01:57:43 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:57:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:43.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:57:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:43.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:45 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:57:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:45.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:57:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:45.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:45 np0005539508 systemd[1]: var-lib-containers-storage-overlay-65f341a8619184dcc9a322f97408c6659eab86dd02ce96f53f8094aa256a4729-merged.mount: Deactivated successfully.
Nov 29 01:57:45 np0005539508 podman[269369]: 2025-11-29 06:57:45.787474986 +0000 UTC m=+5.439372133 container remove efd828ad2cec880b8fc410cb8a64dfac5a495a419411715794e4399be50e0080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 01:57:45 np0005539508 systemd[1]: libpod-conmon-efd828ad2cec880b8fc410cb8a64dfac5a495a419411715794e4399be50e0080.scope: Deactivated successfully.
Nov 29 01:57:46 np0005539508 podman[269553]: 2025-11-29 06:57:46.375046679 +0000 UTC m=+0.022847096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:57:47 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:47.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:47 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:57:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:47.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:47 np0005539508 podman[269553]: 2025-11-29 06:57:47.469218413 +0000 UTC m=+1.117018820 container create 056e18b443a8b40bff70083a71f8f896ad4cde5bb312172fe02ef129efc87f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_perlman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:57:48 np0005539508 systemd[1]: Started libpod-conmon-056e18b443a8b40bff70083a71f8f896ad4cde5bb312172fe02ef129efc87f87.scope.
Nov 29 01:57:48 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:57:48 np0005539508 podman[269553]: 2025-11-29 06:57:48.435172769 +0000 UTC m=+2.082973186 container init 056e18b443a8b40bff70083a71f8f896ad4cde5bb312172fe02ef129efc87f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_perlman, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 01:57:48 np0005539508 podman[269553]: 2025-11-29 06:57:48.442541763 +0000 UTC m=+2.090342180 container start 056e18b443a8b40bff70083a71f8f896ad4cde5bb312172fe02ef129efc87f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_perlman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 01:57:48 np0005539508 clever_perlman[269572]: 167 167
Nov 29 01:57:48 np0005539508 systemd[1]: libpod-056e18b443a8b40bff70083a71f8f896ad4cde5bb312172fe02ef129efc87f87.scope: Deactivated successfully.
Nov 29 01:57:48 np0005539508 podman[269553]: 2025-11-29 06:57:48.922146259 +0000 UTC m=+2.569946706 container attach 056e18b443a8b40bff70083a71f8f896ad4cde5bb312172fe02ef129efc87f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 01:57:48 np0005539508 podman[269553]: 2025-11-29 06:57:48.923346122 +0000 UTC m=+2.571146559 container died 056e18b443a8b40bff70083a71f8f896ad4cde5bb312172fe02ef129efc87f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_perlman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 01:57:49 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:49.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:57:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:49.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:57:51 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:51.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:51.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:51 np0005539508 systemd[1]: var-lib-containers-storage-overlay-c18a208ade45cba741bcbbf9e70905ae1027b330909e9ff04653a927a455ea84-merged.mount: Deactivated successfully.
Nov 29 01:57:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:57:53 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:53.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:57:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:53.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:57:54 np0005539508 podman[269553]: 2025-11-29 06:57:54.013610824 +0000 UTC m=+7.661411261 container remove 056e18b443a8b40bff70083a71f8f896ad4cde5bb312172fe02ef129efc87f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_perlman, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:57:54 np0005539508 systemd[1]: libpod-conmon-056e18b443a8b40bff70083a71f8f896ad4cde5bb312172fe02ef129efc87f87.scope: Deactivated successfully.
Nov 29 01:57:54 np0005539508 podman[269593]: 2025-11-29 06:57:54.131043814 +0000 UTC m=+1.086252177 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 01:57:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:57:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:57:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:57:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:57:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:57:54
Nov 29 01:57:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:57:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:57:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'images', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'backups']
Nov 29 01:57:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:57:54 np0005539508 podman[269621]: 2025-11-29 06:57:54.254347618 +0000 UTC m=+0.033605324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 01:57:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:57:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:57:54 np0005539508 podman[269621]: 2025-11-29 06:57:54.750524492 +0000 UTC m=+0.529782118 container create f871da8aa383fdb1a0a4843958e16cbb9b3b1d4f929381e66492177273e4e7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:57:54 np0005539508 systemd[1]: Started libpod-conmon-f871da8aa383fdb1a0a4843958e16cbb9b3b1d4f929381e66492177273e4e7a7.scope.
Nov 29 01:57:54 np0005539508 systemd[1]: Started libcrun container.
Nov 29 01:57:54 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf175aff26530b0afc9a48654857fb4c1438458a861184a74d9f74234298d7fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 01:57:54 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf175aff26530b0afc9a48654857fb4c1438458a861184a74d9f74234298d7fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 01:57:54 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf175aff26530b0afc9a48654857fb4c1438458a861184a74d9f74234298d7fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 01:57:54 np0005539508 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf175aff26530b0afc9a48654857fb4c1438458a861184a74d9f74234298d7fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 01:57:54 np0005539508 podman[269621]: 2025-11-29 06:57:54.960767148 +0000 UTC m=+0.740024774 container init f871da8aa383fdb1a0a4843958e16cbb9b3b1d4f929381e66492177273e4e7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 01:57:54 np0005539508 podman[269621]: 2025-11-29 06:57:54.968011089 +0000 UTC m=+0.747268725 container start f871da8aa383fdb1a0a4843958e16cbb9b3b1d4f929381e66492177273e4e7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 01:57:55 np0005539508 podman[269621]: 2025-11-29 06:57:55.025167746 +0000 UTC m=+0.804425382 container attach f871da8aa383fdb1a0a4843958e16cbb9b3b1d4f929381e66492177273e4e7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 01:57:55 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:55.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:55.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:55 np0005539508 gallant_ride[269638]: {
Nov 29 01:57:55 np0005539508 gallant_ride[269638]:    "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 01:57:55 np0005539508 gallant_ride[269638]:        "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 01:57:55 np0005539508 gallant_ride[269638]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 01:57:55 np0005539508 gallant_ride[269638]:        "osd_id": 1,
Nov 29 01:57:55 np0005539508 gallant_ride[269638]:        "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 01:57:55 np0005539508 gallant_ride[269638]:        "type": "bluestore"
Nov 29 01:57:55 np0005539508 gallant_ride[269638]:    }
Nov 29 01:57:55 np0005539508 gallant_ride[269638]: }
Nov 29 01:57:55 np0005539508 systemd[1]: libpod-f871da8aa383fdb1a0a4843958e16cbb9b3b1d4f929381e66492177273e4e7a7.scope: Deactivated successfully.
Nov 29 01:57:55 np0005539508 podman[269621]: 2025-11-29 06:57:55.905040292 +0000 UTC m=+1.684297918 container died f871da8aa383fdb1a0a4843958e16cbb9b3b1d4f929381e66492177273e4e7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 01:57:56 np0005539508 systemd[1]: var-lib-containers-storage-overlay-cf175aff26530b0afc9a48654857fb4c1438458a861184a74d9f74234298d7fe-merged.mount: Deactivated successfully.
Nov 29 01:57:56 np0005539508 podman[269621]: 2025-11-29 06:57:56.537248143 +0000 UTC m=+2.316505769 container remove f871da8aa383fdb1a0a4843958e16cbb9b3b1d4f929381e66492177273e4e7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 01:57:56 np0005539508 systemd[1]: libpod-conmon-f871da8aa383fdb1a0a4843958e16cbb9b3b1d4f929381e66492177273e4e7a7.scope: Deactivated successfully.
Nov 29 01:57:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 01:57:56 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:57:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 01:57:56 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:57:56 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 4d60daf7-9d2e-4d39-9e0a-154dc064fd36 does not exist
Nov 29 01:57:56 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 50130242-358e-4b71-b19e-6498f0ba4704 does not exist
Nov 29 01:57:56 np0005539508 ceph-mgr[74948]: [progress WARNING root] complete: ev 28d220f8-d8ef-44a3-aa9a-60912dcde1d1 does not exist
Nov 29 01:57:57 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:57.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:57:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:57.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:57:57 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:57:57 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:57:58 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:57:59 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:57:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:57:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:59.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:57:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:57:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:57:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:59.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:58:01 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:58:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:01.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:58:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:58:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:01.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:58:02 np0005539508 podman[269784]: 2025-11-29 06:58:02.088681508 +0000 UTC m=+0.050711188 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 01:58:02 np0005539508 podman[269785]: 2025-11-29 06:58:02.118160616 +0000 UTC m=+0.080221557 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 01:58:03 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:58:03 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:58:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:03.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:58:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:03.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:05 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:05.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:05.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:07 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:07.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:07 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:07 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:58:07 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:07.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:58:08 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:58:09 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:09.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:09 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:09 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:58:09 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:09.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:58:11 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:11.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:11 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:11 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:11 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:11.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:13 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:13.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 01:58:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:58:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 01:58:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:58:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:58:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:58:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:58:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:58:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:58:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:58:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:58:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:58:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 01:58:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:58:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:58:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:58:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 01:58:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:58:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 01:58:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:58:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 01:58:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 01:58:13 np0005539508 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 01:58:13 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:13 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:13 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:13.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:14 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:58:15 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:15.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:15 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:15 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:58:15 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:15.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:58:17 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:58:17.251 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:58:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:58:17.253 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:58:17 np0005539508 ovn_metadata_agent[157760]: 2025-11-29 06:58:17.253 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:58:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:58:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:17.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:58:17 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:17 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:58:17 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:17.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:58:18 np0005539508 nova_compute[251877]: 2025-11-29 06:58:18.958 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:58:18 np0005539508 nova_compute[251877]: 2025-11-29 06:58:18.959 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:58:18 np0005539508 nova_compute[251877]: 2025-11-29 06:58:18.959 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 01:58:19 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:19.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:19 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:19 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:19 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:19.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:19 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:58:20 np0005539508 nova_compute[251877]: 2025-11-29 06:58:20.047 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:58:20 np0005539508 nova_compute[251877]: 2025-11-29 06:58:20.957 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:58:21 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:58:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:21.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:58:21 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:21 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:58:21 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:21.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:58:22 np0005539508 nova_compute[251877]: 2025-11-29 06:58:22.191 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:58:22 np0005539508 nova_compute[251877]: 2025-11-29 06:58:22.191 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:58:22 np0005539508 nova_compute[251877]: 2025-11-29 06:58:22.191 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:58:22 np0005539508 nova_compute[251877]: 2025-11-29 06:58:22.191 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 01:58:22 np0005539508 nova_compute[251877]: 2025-11-29 06:58:22.192 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 01:58:23 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 01:58:23 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/705562816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 01:58:23 np0005539508 nova_compute[251877]: 2025-11-29 06:58:23.194 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.002s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 01:58:23 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:58:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:23.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:58:23 np0005539508 nova_compute[251877]: 2025-11-29 06:58:23.376 251881 WARNING nova.virt.libvirt.driver [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 01:58:23 np0005539508 nova_compute[251877]: 2025-11-29 06:58:23.378 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5208MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 01:58:23 np0005539508 nova_compute[251877]: 2025-11-29 06:58:23.378 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 01:58:23 np0005539508 nova_compute[251877]: 2025-11-29 06:58:23.379 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 01:58:23 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:23 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:23 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:23.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:23 np0005539508 nova_compute[251877]: 2025-11-29 06:58:23.761 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 01:58:23 np0005539508 nova_compute[251877]: 2025-11-29 06:58:23.761 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 01:58:23 np0005539508 nova_compute[251877]: 2025-11-29 06:58:23.791 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 01:58:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 01:58:24 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3131628032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 01:58:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:58:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:58:24 np0005539508 nova_compute[251877]: 2025-11-29 06:58:24.310 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 01:58:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:58:24 np0005539508 nova_compute[251877]: 2025-11-29 06:58:24.318 251881 DEBUG nova.compute.provider_tree [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed in ProviderTree for provider: 36ed0248-8d04-4532-95bb-daab89f12202 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 01:58:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:58:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:58:24 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:58:24 np0005539508 nova_compute[251877]: 2025-11-29 06:58:24.457 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed for provider 36ed0248-8d04-4532-95bb-daab89f12202 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 01:58:24 np0005539508 nova_compute[251877]: 2025-11-29 06:58:24.459 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 01:58:24 np0005539508 nova_compute[251877]: 2025-11-29 06:58:24.459 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 01:58:24 np0005539508 nova_compute[251877]: 2025-11-29 06:58:24.460 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:58:24 np0005539508 nova_compute[251877]: 2025-11-29 06:58:24.460 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 01:58:24 np0005539508 nova_compute[251877]: 2025-11-29 06:58:24.692 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 01:58:24 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:58:25 np0005539508 podman[269936]: 2025-11-29 06:58:25.084696263 +0000 UTC m=+0.067037132 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 29 01:58:25 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:25.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:25 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:25 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:25 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:25.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:25 np0005539508 nova_compute[251877]: 2025-11-29 06:58:25.693 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:58:25 np0005539508 nova_compute[251877]: 2025-11-29 06:58:25.693 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:58:25 np0005539508 nova_compute[251877]: 2025-11-29 06:58:25.693 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:58:25 np0005539508 nova_compute[251877]: 2025-11-29 06:58:25.694 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 01:58:25 np0005539508 nova_compute[251877]: 2025-11-29 06:58:25.959 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:58:26 np0005539508 nova_compute[251877]: 2025-11-29 06:58:26.958 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:58:26 np0005539508 nova_compute[251877]: 2025-11-29 06:58:26.958 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 01:58:26 np0005539508 nova_compute[251877]: 2025-11-29 06:58:26.959 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 01:58:26 np0005539508 nova_compute[251877]: 2025-11-29 06:58:26.975 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 01:58:27 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:27.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:27 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:27 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:58:27 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:27.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:58:27 np0005539508 nova_compute[251877]: 2025-11-29 06:58:27.957 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:58:29 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:29.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:29 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:29 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:58:29 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:29.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:58:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 01:58:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:58:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:58:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:58:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:58:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 01:58:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 01:58:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 01:58:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 01:58:29 np0005539508 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 01:58:29 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:58:31 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:31.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:31 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:31 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:58:31 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:31.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:58:32 np0005539508 nova_compute[251877]: 2025-11-29 06:58:32.959 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 01:58:33 np0005539508 podman[269960]: 2025-11-29 06:58:33.14046711 +0000 UTC m=+0.090317028 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 01:58:33 np0005539508 podman[269961]: 2025-11-29 06:58:33.189665936 +0000 UTC m=+0.134693351 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 01:58:33 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:33.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:33 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:33 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:58:33 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:33.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:58:34 np0005539508 systemd-logind[797]: New session 52 of user zuul.
Nov 29 01:58:34 np0005539508 systemd[1]: Started Session 52 of User zuul.
Nov 29 01:58:34 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:58:35 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:35.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:35 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:35 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:35 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:35.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:35 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 01:58:35 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.8 total, 600.0 interval#012Cumulative writes: 9755 writes, 36K keys, 9755 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 9755 writes, 2327 syncs, 4.19 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 561 writes, 878 keys, 561 commit groups, 1.0 writes per commit group, ingest: 0.27 MB, 0.00 MB/s#012Interval WAL: 561 writes, 253 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 01:58:37 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24755 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:37 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:58:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:37.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:58:37 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:37 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:37 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:37.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:37 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24761 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:37 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14961 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:38 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14967 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:38 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24820 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:38 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 01:58:38 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1977521267' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 01:58:39 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 01:58:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:39.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 01:58:39 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:39 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:39 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:39.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:39 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24826 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:40 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:58:41 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:41.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:41 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:41 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:58:41 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:41.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:58:43 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:43.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:43 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:43 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:43 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:43.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:44 np0005539508 ovs-vsctl[270391]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 29 01:58:45 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:58:45 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:58:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:45.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:58:45 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:45 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:58:45 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:45.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:58:45 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24776 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:45 np0005539508 virtqemud[251417]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 29 01:58:45 np0005539508 virtqemud[251417]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 29 01:58:45 np0005539508 virtqemud[251417]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 01:58:45 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 01:58:45 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 01:58:46 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24835 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:46 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24841 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:46 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: cache status {prefix=cache status} (starting...)
Nov 29 01:58:46 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Can't run that command on an inactive MDS!
Nov 29 01:58:46 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: client ls {prefix=client ls} (starting...)
Nov 29 01:58:46 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Can't run that command on an inactive MDS!
Nov 29 01:58:46 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24853 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:46 np0005539508 lvm[270751]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 01:58:46 np0005539508 lvm[270751]: VG ceph_vg0 finished
Nov 29 01:58:46 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24803 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:46 np0005539508 ceph-mgr[74948]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 01:58:46 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:58:46.951+0000 7f90f1cf5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 01:58:46 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 01:58:46 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 01:58:47 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: damage ls {prefix=damage ls} (starting...)
Nov 29 01:58:47 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Can't run that command on an inactive MDS!
Nov 29 01:58:47 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14982 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:47 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:58:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:47.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:58:47 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: dump loads {prefix=dump loads} (starting...)
Nov 29 01:58:47 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Can't run that command on an inactive MDS!
Nov 29 01:58:47 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:47 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:47 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:47.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:47 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24880 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:47 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:58:47.453+0000 7f90f1cf5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 01:58:47 np0005539508 ceph-mgr[74948]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 01:58:47 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 29 01:58:47 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Can't run that command on an inactive MDS!
Nov 29 01:58:47 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 29 01:58:47 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Can't run that command on an inactive MDS!
Nov 29 01:58:47 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 29 01:58:47 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Can't run that command on an inactive MDS!
Nov 29 01:58:47 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 29 01:58:47 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Can't run that command on an inactive MDS!
Nov 29 01:58:48 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 29 01:58:48 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Can't run that command on an inactive MDS!
Nov 29 01:58:48 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24836 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:48 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 29 01:58:48 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Can't run that command on an inactive MDS!
Nov 29 01:58:48 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14994 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:48 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24910 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:48 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: ops {prefix=ops} (starting...)
Nov 29 01:58:48 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Can't run that command on an inactive MDS!
Nov 29 01:58:48 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24848 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:48 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24916 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:49 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:49 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: session ls {prefix=session ls} (starting...)
Nov 29 01:58:49 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Can't run that command on an inactive MDS!
Nov 29 01:58:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 01:58:49 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 01:58:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:49.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:49 np0005539508 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: status {prefix=status} (starting...)
Nov 29 01:58:49 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15018 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:49 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:58:49.385+0000 7f90f1cf5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 01:58:49 np0005539508 ceph-mgr[74948]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 01:58:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 01:58:49 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 01:58:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 01:58:49 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 01:58:49 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:49 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:49 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:49.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 01:58:49 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3758656299' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 01:58:49 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 29 01:58:49 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/844408131' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 01:58:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:58:50 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24970 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:50 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:58:50.237+0000 7f90f1cf5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 01:58:50 np0005539508 ceph-mgr[74948]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 01:58:50 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24893 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:50 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:58:50.247+0000 7f90f1cf5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 01:58:50 np0005539508 ceph-mgr[74948]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 01:58:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 01:58:50 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1662226395' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 01:58:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 29 01:58:50 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4145305371' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 01:58:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 01:58:50 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3003957364' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 01:58:50 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 29 01:58:50 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1120183002' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 01:58:51 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:51.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:51 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:51 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:51 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:51.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890585 data_alloc: 218103808 data_used: 282624
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 6381568 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 133 handle_osd_map epochs [133,134], i have 134, src has [1,134]
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 134 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=131/132 n=5 ec=58/47 lis/c=131/78 les/c/f=132/79/0 sis=133) [1] r=0 lpr=133 pi=[78,133)/1 crt=56'1130 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.557994 2 0.000098
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 134 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=131/132 n=5 ec=58/47 lis/c=131/78 les/c/f=132/79/0 sis=133) [1] r=0 lpr=133 pi=[78,133)/1 crt=56'1130 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.559969 0 0.000000
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 134 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=131/132 n=5 ec=58/47 lis/c=131/78 les/c/f=132/79/0 sis=133) [1] r=0 lpr=133 pi=[78,133)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 134 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=133/134 n=5 ec=58/47 lis/c=131/78 les/c/f=132/79/0 sis=133) [1] r=0 lpr=133 pi=[78,133)/1 crt=56'1130 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 6373376 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 6373376 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 134 heartbeat osd_stat(store_statfs(0x1bca8a000/0x0/0x1bfc00000, data 0xcd772/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 6373376 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 134 heartbeat osd_stat(store_statfs(0x1bca8a000/0x0/0x1bfc00000, data 0xcd772/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 6365184 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 134 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=133/134 n=5 ec=58/47 lis/c=131/78 les/c/f=132/79/0 sis=133) [1] r=0 lpr=133 pi=[78,133)/1 crt=56'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 134 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=133/134 n=5 ec=58/47 lis/c=133/78 les/c/f=134/79/0 sis=133) [1] r=0 lpr=133 pi=[78,133)/1 crt=56'1130 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 3.733084 4 0.000496
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 134 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=133/134 n=5 ec=58/47 lis/c=133/78 les/c/f=134/79/0 sis=133) [1] r=0 lpr=133 pi=[78,133)/1 crt=56'1130 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 134 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=133/134 n=5 ec=58/47 lis/c=133/78 les/c/f=134/79/0 sis=133) [1] r=0 lpr=133 pi=[78,133)/1 crt=56'1130 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000038 0 0.000000
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 134 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=133/134 n=5 ec=58/47 lis/c=133/78 les/c/f=134/79/0 sis=133) [1] r=0 lpr=133 pi=[78,133)/1 crt=56'1130 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892679 data_alloc: 218103808 data_used: 282624
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 6365184 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 6365184 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 6356992 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 134 heartbeat osd_stat(store_statfs(0x1bca8c000/0x0/0x1bfc00000, data 0xcd772/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 6356992 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 6348800 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892679 data_alloc: 218103808 data_used: 282624
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 6348800 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 134 heartbeat osd_stat(store_statfs(0x1bca8c000/0x0/0x1bfc00000, data 0xcd772/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 6348800 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 6340608 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 6340608 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.667863846s of 13.856606483s, submitted: 12
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f(unlocked)] enter Initial
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=0 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000085 0 0.000000
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=0 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000031 1 0.000044
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 135 handle_osd_map epochs [135,135], i have 135, src has [1,135]
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000301 1 0.000092
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000046 0 0.000000
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000363 0 0.000000
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 6307840 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898207 data_alloc: 218103808 data_used: 290816
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 6299648 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 6283264 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 135 heartbeat osd_stat(store_statfs(0x1bca88000/0x0/0x1bfc00000, data 0xcf3cb/0x195000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 6283264 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 135 heartbeat osd_stat(store_statfs(0x1bca88000/0x0/0x1bfc00000, data 0xcf3cb/0x195000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 6275072 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 6266880 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 135 handle_osd_map epochs [135,136], i have 136, src has [1,136]
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 135 heartbeat osd_stat(store_statfs(0x1bca88000/0x0/0x1bfc00000, data 0xcf3cb/0x195000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 5.839487 2 0.000169
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 5.840039 0 0.000000
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 5.840112 0 0.000000
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000406 1 0.000660
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000056 0 0.000000
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 901853 data_alloc: 218103808 data_used: 290816
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 6242304 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 6242304 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 6234112 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 6234112 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.365139961s of 10.466350555s, submitted: 5
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 136 heartbeat osd_stat(store_statfs(0x1bca84000/0x0/0x1bfc00000, data 0xd105e/0x198000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 6234112 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 137 pg[9.1f( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 crt=56'1130 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 4.826138 5 0.000216
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 137 pg[9.1f( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 crt=56'1130 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 137 pg[9.1f( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 crt=56'1130 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904155 data_alloc: 218103808 data_used: 290816
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 6332416 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: not registered w/ OSD
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 137 pg[9.1f( v 56'1130 lc 54'521 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 luod=0'0 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.481476 4 0.000502
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 137 pg[9.1f( v 56'1130 lc 54'521 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 luod=0'0 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 137 pg[9.1f( v 56'1130 lc 54'521 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 luod=0'0 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000169 1 0.000062
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 137 pg[9.1f( v 56'1130 lc 54'521 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 luod=0'0 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 6332416 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 6324224 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 137 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 luod=0'0 crt=56'1130 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 1.536255 1 0.000165
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 137 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 luod=0'0 crt=56'1130 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 6324224 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 137 heartbeat osd_stat(store_statfs(0x1bca82000/0x0/0x1bfc00000, data 0xd2bf4/0x19c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 6324224 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910988 data_alloc: 218103808 data_used: 290816
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 6316032 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 6316032 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 6316032 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 137 heartbeat osd_stat(store_statfs(0x1bca82000/0x0/0x1bfc00000, data 0xd2bf4/0x19c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 6307840 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 6299648 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915162 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 6299648 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.259155273s of 12.010793686s, submitted: 14
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 luod=0'0 crt=56'1130 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 8.198978 1 0.000058
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 luod=0'0 crt=56'1130 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 11.217182 0 0.000000
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 luod=0'0 crt=56'1130 mlcod 0'0 active+remapped mbc={}] exit Started 16.043512 0 0.000000
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 luod=0'0 crt=56'1130 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 luod=0'0 crt=56'1130 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] exit Reset 0.000278 1 0.000432
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] enter Started
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] enter Start
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] exit Start 0.000073 0 0.000000
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000084 1 0.000274
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: merge_log_dups log.dups.size()=0 olog.dups.size()=33
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=33
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=136/137 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001825 3 0.000127
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=136/137 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=136/137 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=136/137 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 6266880 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 138 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 6266880 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 138 handle_osd_map epochs [139,139], i have 139, src has [1,139]
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 139 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=136/137 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 2.255414 2 0.000114
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 139 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=136/137 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 2.257459 0 0.000000
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 139 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=136/137 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 139 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=138/139 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 6258688 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7a000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 139 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=138/139 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 139 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=138/139 n=5 ec=58/47 lis/c=138/98 les/c/f=139/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.741571 4 0.000202
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 139 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=138/139 n=5 ec=58/47 lis/c=138/98 les/c/f=139/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 139 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=138/139 n=5 ec=58/47 lis/c=138/98 les/c/f=139/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000039 0 0.000000
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 pg_epoch: 139 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=138/139 n=5 ec=58/47 lis/c=138/98 les/c/f=139/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 6258688 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 6258688 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 6250496 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 6250496 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 6242304 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 6242304 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 6242304 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 6234112 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 6234112 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 6225920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 6225920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 6225920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 6217728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 6217728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 6209536 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 6209536 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 6201344 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 6201344 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 6193152 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 6193152 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 6193152 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 6184960 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 6176768 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 6176768 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 6168576 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 6168576 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 6168576 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 6160384 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 6160384 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 6152192 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 6152192 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 6144000 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 6144000 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 6144000 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 6135808 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 6135808 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 6135808 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 6127616 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 6127616 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 6119424 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 6119424 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 6111232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 6111232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 6111232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 6103040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 6103040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 6094848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 6094848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 6094848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 6086656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 6086656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 6078464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 6078464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 6078464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 6070272 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 6070272 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 6062080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 6062080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 6053888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 6053888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 6053888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 6045696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 6045696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 6037504 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 6037504 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 6037504 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 6029312 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 6021120 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 6012928 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 6012928 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 6004736 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 6004736 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 6004736 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 5996544 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 5996544 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 5996544 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 5988352 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 5988352 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 5980160 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 5980160 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 5963776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 5963776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 5963776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 5955584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 5955584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 5955584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 5947392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 5947392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 5939200 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 5939200 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 5931008 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 5931008 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 5931008 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 5922816 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 5922816 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 5922816 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 5914624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 5914624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 5914624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 5906432 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 5906432 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 5898240 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 5898240 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 5890048 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 5890048 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 5890048 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 5881856 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 5881856 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 5873664 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 5873664 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 5865472 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 5865472 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 5857280 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 5857280 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 5849088 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 5849088 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 5840896 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 5840896 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 5832704 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 5832704 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 5832704 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 5824512 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 5824512 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 5816320 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 5816320 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 5816320 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 5808128 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 5808128 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 5799936 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 5799936 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 5799936 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 5791744 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 5791744 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 5783552 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 5783552 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 5775360 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 5775360 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 5775360 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 5767168 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 5767168 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 5767168 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 5758976 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 5758976 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 5742592 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 5742592 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 5734400 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 5734400 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 5734400 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 5726208 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.8 total, 600.0 interval
Cumulative writes: 7884 writes, 33K keys, 7884 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 7884 writes, 1451 syncs, 5.43 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 7884 writes, 33K keys, 7884 commit groups, 1.0 writes per commit group, ingest: 20.94 MB, 0.03 MB/s
Interval WAL: 7884 writes, 1451 syncs, 5.43 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.8 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.8 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.8 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slo
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 5660672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 5660672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 5652480 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 5652480 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 5644288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 5644288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 5644288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 5636096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 5627904 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 5619712 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 5619712 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 5611520 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 5611520 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 5611520 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 5603328 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 5603328 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 5603328 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 5578752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 5578752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 5570560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 5570560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 5570560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 5562368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 5562368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 5554176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 5554176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 5545984 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 5545984 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 5545984 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 5537792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 5537792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 5537792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 5529600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 5529600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 5521408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 5521408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 5513216 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 5513216 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 5513216 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 5505024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 5505024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 5505024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 5496832 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 5496832 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 5488640 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 5488640 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 5488640 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 5480448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 5480448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 5472256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 5472256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 5464064 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 5455872 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 5455872 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 5447680 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 5447680 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 5447680 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 5439488 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 5439488 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 5431296 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 5431296 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 5423104 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 5423104 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 5423104 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 5414912 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 5414912 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 5406720 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 5406720 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 5406720 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 5398528 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 5398528 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 5390336 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 5390336 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 5382144 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 5382144 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 5373952 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 5373952 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 5373952 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 5365760 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 5365760 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 5357568 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 5357568 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 5349376 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 5349376 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 5349376 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 5349376 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 5341184 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 5341184 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 5332992 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 5332992 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 5324800 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 5324800 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 5324800 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 5308416 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 5308416 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 5292032 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 5292032 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 5283840 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 5283840 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 5283840 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 5275648 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 5275648 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 5267456 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82845696 unmapped: 5259264 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82845696 unmapped: 5259264 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 5251072 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 5251072 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 5242880 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 5242880 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 5242880 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 5234688 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 5234688 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 5226496 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 5226496 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 5218304 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 5218304 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 5218304 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 5210112 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 5210112 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 5210112 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 5201920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 5201920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 5201920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 5201920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 5201920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 5201920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 5201920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 5201920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 5201920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 5201920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 5201920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 5201920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 294.404235840s of 297.958953857s, submitted: 12
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 5103616 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,2])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 5021696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917400 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 5021696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 4972544 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 4972544 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 4997120 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 4980736 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917328 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 4980736 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 4947968 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 4947968 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 4947968 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 4915200 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 4915200 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 4915200 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 4907008 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 4907008 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 4907008 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 4907008 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 4907008 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 4898816 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 4898816 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 4898816 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 4898816 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 4898816 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 4898816 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 4898816 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 4898816 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 4898816 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: mgrc ms_handle_reset ms_handle_reset con 0x5633f09adc00
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1221624088
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1221624088,v1:192.168.122.100:6801/1221624088]
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: mgrc handle_mgr_configure stats_period=5
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 4685824 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 4685824 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 4685824 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 ms_handle_reset con 0x5633f13f6c00 session 0x5633f0947c20
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 4685824 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 4685824 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4661248 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4661248 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4661248 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4661248 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4661248 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4661248 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4661248 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4661248 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4661248 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4661248 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 4628480 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 4628480 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:52 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 4603904 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 4603904 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 4603904 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 4603904 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 4603904 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 4603904 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.8 total, 600.0 interval
Cumulative writes: 8512 writes, 34K keys, 8512 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 8512 writes, 1746 syncs, 4.88 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 628 writes, 988 keys, 628 commit groups, 1.0 writes per commit group, ingest: 0.32 MB, 0.00 MB/s
Interval WAL: 628 writes, 295 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.8 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.8 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.8 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
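Multi-line messages such as the RocksDB stats dump above pass through the syslog pipeline with their embedded newlines escaped as `#012` (the octal code for `\n` produced by rsyslog's default control-character escaping). A minimal sketch for restoring them when post-processing a captured log, assuming that escaping convention; the sample string is illustrative only:

```python
def unescape_syslog(line: str) -> str:
    """Replace rsyslog's '#012' newline escape with real newlines."""
    return line.replace("#012", "\n")

if __name__ == "__main__":
    sample = "** DB Stats **#012Uptime(secs): 1200.8 total, 600.0 interval"
    print(unescape_syslog(sample))
```

Piping `journalctl` output through such a filter makes the compaction-stats tables readable again without altering any other log content.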
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4554752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4554752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4554752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4554752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4554752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4554752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4554752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4554752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4554752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4554752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4554752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4554752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 4538368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 4538368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 4538368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 594.422546387s of 600.483703613s, submitted: 333
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 4538368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 4268032 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 1800.8 total, 600.0 interval
                                              Cumulative writes: 9194 writes, 35K keys, 9194 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                              Cumulative WAL: 9194 writes, 2074 syncs, 4.43 writes per sync, written: 0.02 GB, 0.01 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 682 writes, 1062 keys, 682 commit groups, 1.0 writes per commit group, ingest: 0.34 MB, 0.00 MB/s
                                              Interval WAL: 682 writes, 328 syncs, 2.08 writes per sync, written: 0.00 GB, 0.00 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-mgr[74948]: [devicehealth INFO root] Check health
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 ms_handle_reset con 0x5633f3a4e800 session 0x5633f43e14a0
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 4022272 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 4022272 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 4022272 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 4022272 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 4022272 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 4022272 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 4022272 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 4022272 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 ms_handle_reset con 0x5633f3515800 session 0x5633f1da8d20
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 599.286376953s of 600.261535645s, submitted: 354
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 3981312 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 3981312 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 3981312 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 3817472 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3588096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 3538944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [0,0,0,1,0,1])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3522560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3522560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3522560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3497984 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3440640 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3440640 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3440640 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3440640 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3440640 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3440640 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.8 total, 600.0 interval#012Cumulative writes: 9755 writes, 36K keys, 9755 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 9755 writes, 2327 syncs, 4.19 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 561 writes, 878 keys, 561 commit groups, 1.0 writes per commit group, ingest: 0.27 MB, 0.00 MB/s#012Interval WAL: 561 writes, 253 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3416064 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3416064 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3416064 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3416064 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3260416 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: do_command 'config diff' '{prefix=config diff}'
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: do_command 'config show' '{prefix=config show}'
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2826240 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 2793472 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 01:58:53 np0005539508 ceph-osd[85162]: do_command 'log dump' '{prefix=log dump}'
Nov 29 01:58:53 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:53.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:53 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:53 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:53 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:53.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:53 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15078 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:53 np0005539508 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 01:58:53 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 01:58:53 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2594614874' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 01:58:53 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15093 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 01:58:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3783031133' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 01:58:54 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24944 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:58:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:58:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:58:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:58:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:58:54
Nov 29 01:58:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 01:58:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 01:58:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] pools ['volumes', '.mgr', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', '.rgw.root', 'backups', 'default.rgw.control']
Nov 29 01:58:54 np0005539508 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 01:58:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 01:58:54 np0005539508 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 01:58:54 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25021 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 01:58:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1423480369' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 01:58:54 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24950 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:54 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25033 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:54 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 01:58:54 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/634521314' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 01:58:55 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:58:55 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 29 01:58:55 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4117577449' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 01:58:55 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24962 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:55 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:55 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25039 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:55.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:55 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:55 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:55 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:55.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:55 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 01:58:55 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/363659733' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 01:58:55 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24968 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:55 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25057 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:55 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15147 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:55 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:58:55.740+0000 7f90f1cf5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 01:58:55 np0005539508 ceph-mgr[74948]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 01:58:55 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15153 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:56 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24983 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:56 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25066 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 29 01:58:56 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4186939961' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 01:58:56 np0005539508 podman[271735]: 2025-11-29 06:58:56.215116218 +0000 UTC m=+0.176993595 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Nov 29 01:58:56 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15168 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:56 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24989 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:56 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25075 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:56 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25081 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:56 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 29 01:58:56 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3606582697' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 01:58:56 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25087 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:56 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24998 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:57 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15186 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:57 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:57 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25096 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 01:58:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:57.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:57 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:57 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:58:57 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:57.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:58:57 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25010 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 01:58:57 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25105 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 01:58:57 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25016 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 01:58:58 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25117 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 01:58:58 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:58:58.340+0000 7f90f1cf5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 01:58:58 np0005539508 ceph-mgr[74948]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 01:58:58 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25028 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 01:58:58 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:58:58.669+0000 7f90f1cf5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 01:58:58 np0005539508 ceph-mgr[74948]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 01:58:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 01:58:59 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1523353195' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 01:58:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 01:58:59 np0005539508 systemd[1]: Starting Hostname Service...
Nov 29 01:58:59 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:58:59 np0005539508 systemd[1]: Started Hostname Service.
Nov 29 01:58:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:59.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:59 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15207 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:59 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:58:59 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:58:59 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:59.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:58:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 01:58:59 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1840177804' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 01:58:59 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15219 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:58:59 np0005539508 podman[272069]: 2025-11-29 06:58:59.824154929 +0000 UTC m=+1.778225016 container exec c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 01:58:59 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 01:58:59 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/95145201' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 01:59:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:59:00 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15231 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 01:59:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 01:59:00 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2283979250' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 01:59:00 np0005539508 podman[272258]: 2025-11-29 06:59:00.500057073 +0000 UTC m=+0.580307601 container exec_died c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 01:59:00 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15249 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 01:59:00 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 01:59:00 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/871154065' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 01:59:00 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15261 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 01:59:01 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:59:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:59:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 01:59:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:59:01.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 01:59:01 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 29 01:59:01 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3338702776' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 01:59:01 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:59:01 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:59:01 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:59:01.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:59:01 np0005539508 podman[272069]: 2025-11-29 06:59:01.442955389 +0000 UTC m=+3.397025496 container exec_died c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 01:59:02 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:59:03 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:59:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:59:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:59:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:59:03.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:59:03 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:59:03 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:59:03 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:59:03.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:59:03 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 01:59:04 np0005539508 podman[272678]: 2025-11-29 06:59:04.162695362 +0000 UTC m=+0.124365403 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Nov 29 01:59:04 np0005539508 podman[272679]: 2025-11-29 06:59:04.185146286 +0000 UTC m=+0.139045682 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 01:59:04 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15285 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 01:59:04 np0005539508 ceph-mgr[74948]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 01:59:04 np0005539508 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:59:04.348+0000 7f90f1cf5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 01:59:04 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25231 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 01:59:04 np0005539508 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:59:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 29 01:59:04 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/256323056' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 01:59:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 29 01:59:04 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1258564348' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 01:59:04 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 01:59:04 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 29 01:59:04 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/398999456' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 01:59:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 01:59:05 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25145 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 01:59:05 np0005539508 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 01:59:05 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25252 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 01:59:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:59:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:59:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:59:05.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:59:05 np0005539508 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 01:59:05 np0005539508 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 01:59:05 np0005539508 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:59:05.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 01:59:05 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25154 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 01:59:05 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 01:59:05 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 29 01:59:05 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/543064333' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 01:59:05 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25172 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 01:59:05 np0005539508 podman[272900]: 2025-11-29 06:59:05.872520719 +0000 UTC m=+1.180436541 container exec f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 01:59:05 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:59:05 np0005539508 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 01:59:06 np0005539508 podman[272987]: 2025-11-29 06:59:06.027103451 +0000 UTC m=+0.116861715 container exec_died f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 01:59:06 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25285 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 01:59:06 np0005539508 podman[272900]: 2025-11-29 06:59:06.14052349 +0000 UTC m=+1.448439302 container exec_died f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 01:59:06 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25184 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 01:59:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 29 01:59:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3798126204' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 01:59:06 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25297 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 01:59:06 np0005539508 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 29 01:59:06 np0005539508 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1192370045' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 01:59:06 np0005539508 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25202 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
